Microprocessor-based meter project

joaquins said:
It might be a nice touch to implement in the final version; I don't like that misbehavior where the reading bounces up and down with a sine. I would end up with a lower sample rate, but that should be fine. Same thing if using two channels.
I intend to have a mode switch and a peak hold to allow different display modes: VU only, peak only, bar/dot, floating bar. It could also have two peak time responses.
In the console meter we offered a pushbutton setup that allowed peak only, average only, or both. I am not aware of anybody who didn't like both.
In C things are much shorter; compilers are efficient enough if you take care (like avoiding unnecessary divisions and floating point). The code takes less than 2 kB as it is, with only 27 bytes of global variables. I don't like to use local variables in this kind of code: I only have 2 kB of RAM, and functions with an unbounded call depth might overflow the stack, which is hard to keep track of when the code gets larger. I do intend to add at least two channels. A gain reduction input option would also be nice so it can be used in compressors.
If the internal A/D multiplexes two inputs to one A/D you may experience settling-time issues in the sample-and-hold, like I did... but I was running a 12-bit*** A/D and a wider display range.

*** I rectified the raw audio inside the micro so I only got 11 bits of raw dynamic range, but averaging the samples per the time constant added (some) resolution back.

The settling-time error looked like crosstalk ghosting between a loud input and a quiet one. I overcame it by collecting a number of samples in a row and averaging them to remove the apparent crosstalk. With only 10 bits you may not have that problem.
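
In outline, the fix looks something like this (a C sketch; the two helper names are hypothetical stand-ins for whatever the part provides):
Code:
#include <stdint.h>

uint16_t adc_convert(void);           /* hypothetical: one blocking conversion */
void     adc_select_input(uint8_t);   /* hypothetical: switch the input mux */

#define BURST 8                       /* power of two keeps the average cheap */

uint16_t read_channel(uint8_t ch)
{
    uint32_t acc = 0;
    adc_select_input(ch);             /* mux just switched: S/H not settled yet */
    for (uint8_t i = 0; i < BURST; i++)
        acc += adc_convert();         /* settling error gets averaged down */
    return (uint16_t)(acc >> 3);      /* divide by BURST */
}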
  100 ms looks a bit slow, but it was a decision I made: the thing I don't like about analog meters is the blurring effect some have, where an LED kind of appears half-bright when the level sits right at its threshold. With digital ones the LED is either on or off, but it can still blur if the update is too fast. I might make things a bit faster and add some hysteresis or hold so it looks smoother. I wish I knew Dorrough's parameters!
who?........    Dorrough actually got a patent for that POS (even after my patent was issued), which reveals how squirrelly the PTO is. You can read his patent, but I wouldn't expect much insight. As I recall, his slick lawyer invented some different legalese terms for peak and average to blind the examiners into thinking his meter was novel (he also curved his display).  ::)
  I still have to add some option for extended range, as with only 10 bits it gets a bit short: using 9 LEDs in 6 dB steps I would get the dot lit only a few LEDs up. If it's meant to be used with 16 LEDs in 3 dB steps this would still be an issue. This µC doesn't have PGAs, so I have to implement something external, hopefully simple enough. I want to try using a digital pin as an input (set to high-Z) when not attenuating and as an output (low-Z) to attenuate. I have to overcome the DC offset, so it doesn't shift when switching and I don't lose the sample, and work out the trigger for this, which could be the last conversion to go up in gain and the output level to go down.
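
The pin trick, roughly (Arduino-flavored sketch; the pin number is hypothetical, and note that driving the pin to ground is exactly the DC-shift problem mentioned above, so the resistor leg really wants to see mid-rail):
Code:
const int attenPin = 2;     // bottom of the attenuator resistor (hypothetical)

void attenuatorOn() {       // low-Z output: the resistor now divides the input
  digitalWrite(attenPin, LOW);
  pinMode(attenPin, OUTPUT);
}

void attenuatorOff() {      // high-Z input: the resistor leg floats, full gain
  pinMode(attenPin, INPUT);
}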

JS
As always the devil is in the details... 

tag you are it.  8)

JR
 
JohnRoberts said:
In the console meter we offered a pushbutton setup that allowed peak only, average only, or both. I am not aware of anybody who didn't like both.
  I do like the floating bar, as you see the number of LEDs on, which shows the crest factor. It might be harder to read the VU in some cases, but it's useful when the crest factor is more important than the actual level. Peak only might be useful as it's faster to read when concerned about digital clipping.

If the internal A/D multiplexes two inputs to one A/D you may experience settling-time issues in the sample-and-hold, like I did... but I was running a 12-bit*** A/D and a wider display range.

*** I rectified the raw audio inside the micro so I only got 11 bits of raw dynamic range, but averaging the samples per the time constant added (some) resolution back.
The settling-time error looked like crosstalk ghosting between a loud input and a quiet one. I overcame it by collecting a number of samples in a row and averaging them to remove the apparent crosstalk. With only 10 bits you may not have that problem.
  I can change the input source one conversion cycle (of 13) after the conversion trigger, and the input is updated automatically, so the next conversion's sample-and-hold does its job correctly; I shouldn't get delays because of this, nor perceptible crosstalk. The idea is to widen the range using two input gains, but we'll see how it ends up. I'm mostly going for a single channel; two channels would be a nice option, but with some limitations, and given there would be fewer LEDs, the lower dynamic range is less of an issue, so I might not even need the auto gain at all.
who?........    Dorrough actually got a patent for that POS (even after my patent was issued), which reveals how squirrelly the PTO is. You can read his patent, but I wouldn't expect much insight. As I recall, his slick lawyer invented some different legalese terms for peak and average to blind the examiners into thinking his meter was novel (he also curved his display).  ::)

As always the devil is in the details... 

tag you are it.  8)

JR
  While probably nothing really novel in the design, they do look quite nice and precise; they are also available in software now and I think still quite useful. We care more for a useful measurement than a funny light show, that's what Christmas is for... I haven't even looked for an open source project for this, but I don't think it would save me a lot of time. I don't know how useful this would be for anyone else either, as each person has their own needs and might end up in the same hole I'm in; I'll try to make things (hardware and software) easy enough so someone without much experience can make use of this. I think the Arduino also helps here, as it is quite easy to get up and running; I had a really bad experience with NXP products in this respect, as I could not get some demo boards to work after quite a few attempts.
  As a good practice I like not to comment my code unless it's really needed: when debugging you usually update the code but not the comments, which makes comments a dangerous tool. Self-explanatory code is usually better, variable and function naming being one of the main tools for that. For a project open to the community it might be necessary, so I can add some of it in the final version.

JS

 
joaquins said:
Doing the processing inside the conversion-complete interrupt isn't optimal, but it saves some time and makes sure every captured sample gets processed; this could be optimized, but I think it can work as a first step.

Don’t do the math in the ISR.  Store the sample into a global, set a “got new sample” flag and get out. The main loop will test for the flag, and when true it should clear the flag and do the math.

In the simplest systems, spending a lot of time in the ISR might be ok, but if you port the code to a larger system with more interrupts, you’ll end up with problems you can’t debug.
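
A minimal AVR-flavored sketch of that pattern (register and vector names from avr-libc; everything else illustrative):
Code:
#include <avr/io.h>
#include <avr/interrupt.h>
#include <stdint.h>

volatile uint16_t lastSample;       // written by the ISR, read by main
volatile uint8_t  newSample = 0;    // the "got new sample" flag

ISR(ADC_vect)                       // short and sweet: grab it and get out
{
    lastSample = ADC;
    newSample = 1;
}

int main(void)
{
    // ... ADC and interrupt setup here ...
    sei();
    for (;;) {
        if (newSample) {
            newSample = 0;
            cli();                  // 16-bit read isn't atomic on an 8-bit AVR
            uint16_t s = lastSample;
            sei();
            // ... do the math on s ...
        }
    }
}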
 
Andy Peters said:
Don’t do the math in the ISR.  Store the sample into a global, set a “got new sample” flag and get out. The main loop will test for the flag, and when true it should clear the flag and do the math.

In the simplest systems, spending a lot of time in the ISR might be ok, but if you port the code to a larger system with more interrupts, you’ll end up with problems you can’t debug.

  Thanks for the suggestion! I do have that in mind (and not to use goto, but sometimes I do anyway) and that was my first approach, but I'm debugging over the serial port here, and if I do what you say I end up losing quite a lot of samples. Also, the way I'm updating the outputs is quite slow, I think, and the same goes for the serial updating.
  This way it's easier to develop and keep things working while making other decisions.

JS
 
joaquins said:
Also, the way I'm updating the outputs is quite slow, I think, and the same goes for the serial updating.
  This way it's easier to develop and keep things working while making other decisions.

JS

Trying to keep up. 
?????
Thanks!
Best,
Bruno2000
 
Andy Peters said:
Don’t do the math in the ISR.  Store the sample into a global, set a “got new sample” flag and get out. The main loop will test for the flag, and when true it should clear the flag and do the math.

In the simplest systems, spending a lot of time in the ISR might be ok, but if you port the code to a larger system with more interrupts, you’ll end up with problems you can’t debug.

Agreed in general. This is how you get two of the best kinds of bugs -- timing (and data) dependencies that show up only some of the time, and Heisenbugs, where adding debugging changes the timing enough to make the problem (dis)appear.

On the other hand, if you are careful, your signal processing need not take significantly more time than the interrupt overhead itself. I regularly apply a simple first-order filter to data in an interrupt routine (filt -= filt >> 8; filt += (int32_t)(newData);); on AVR, using a 32-bit accumulator with 16-bit ADC data, this takes less than 20 cycles, about half of which are loads and stores.
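
Spelled out, that filter is just the following (AVR sketch; reading the internal 10-bit ADC here for concreteness, JD's 16-bit data would come from elsewhere):
Code:
#include <avr/io.h>
#include <avr/interrupt.h>
#include <stdint.h>

volatile int32_t filt;          // 32-bit accumulator: holds 256x the average

ISR(ADC_vect)
{
    int16_t newData = ADC;      // latest conversion result
    filt -= filt >> 8;          // leak 1/256 of the accumulator
    filt += (int32_t)newData;   // add the new sample
}
// elsewhere: int16_t average = (int16_t)(filt >> 8);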

JDB.
[and if that's more latency than you can handle, it's really time to switch to DMA]
 
Andy Peters said:
Don’t do the math in the ISR.  Store the sample into a global, set a “got new sample” flag and get out. The main loop will test for the flag, and when true it should clear the flag and do the math.
+1...  While it is six of one, half a dozen of the other if you run out of processor time, keeping interrupt service short and sweet is good practice.
In the simplest systems, spending a lot of time in the ISR might be ok, but if you port the code to a larger system with more interrupts, you’ll end up with problems you can’t debug.
The proc family I used supports different interrupt priorities (higher interrupts can interrupt lower ones). Keeping the interrupts short reduces contention and the possibility of unexpected results.

When undersampling there might be some merit in not making the sample follow a hard interrupt, but triggering it from the main loop after crunching the data, to slightly randomize it. While not an issue for a DIY project, when making a commercial product you do not want to sell a meter that drops to a null at the sample frequency, because some customer will find it.
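
A rough Arduino-flavored sketch of that idea (the delay numbers are invented, and it assumes the ADC was enabled in setup() without auto-trigger):
Code:
void loop() {
  static uint16_t lfsr = 0xACE1u;               // cheap 16-bit Galois LFSR
  lfsr = (lfsr >> 1) ^ (-(lfsr & 1u) & 0xB400u);

  delayMicroseconds(50 + (lfsr & 0x1F));        // 50..81 us, randomized
  ADCSRA |= (1 << ADSC);                        // start a single conversion
  while (ADCSRA & (1 << ADSC));                 // ADSC clears when done
  uint16_t s = ADC;
  // ... crunch s ...
}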

JR
 
squarewave said:
Why use an ISR at all?

Indeed a good point: one can simply poll the convert-done flag in the main loop, and if it's set, clear the flag, grab the sample and do the math. It depends on how the particular micro deals with its converter.
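
On an AVR, for instance, the polled loop is just this (a sketch; assumes a free-running setup with the ADC interrupt left disabled):
Code:
#include <avr/io.h>
#include <stdint.h>

int main(void)
{
    /* ... ADC set up free-running, ADIE off ... */
    for (;;) {
        while (!(ADCSRA & (1 << ADIF)))
            ;                       /* spin until a conversion completes */
        ADCSRA |= (1 << ADIF);      /* clear the flag: write a 1 to ADIF */
        uint16_t s = ADC;           /* ADCL then ADCH, in the right order */
        /* ... do the math on s ... */
    }
}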
 
joaquins said:
  Thanks for the suggestion! I do have that in mind (and not to use goto, but sometimes I do anyway) and that was my first approach, but I'm debugging over the serial port here, and if I do what you say I end up losing quite a lot of samples. Also, the way I'm updating the outputs is quite slow, I think, and the same goes for the serial updating.

I remember a first project at a job a bunch of years ago: fixing the serial interface on a temperature-control board. The board had a DS5000T (an 8051 variant with an on-board battery to keep the program memory alive; this was before flash-based micros), and the micro ran a PID loop at some reasonable rate and used the standard 8051 UART to communicate with a host processor (a 68K-based VME board).

Well, the thing was dropping characters in the communications. The previous engineer (a PhD in control theory, a whiz at MATLAB, but without any practical experience in embedded micros) thought the problem had to be EMC or some other cable/interface-related issue, so a lot of work went into shielding the communications cable (which was short, maybe a foot) and all that. Also, the thing wasn't regulating temperature.

I got the code, all written in C (and yes, I refuse to do assembly language for these micros; there is no reason to do so). It used one of the timers to pace the PID loop update. I noticed two things that were glaring red flags.

One: he had implemented the PID loop math in the timer-tick ISR. He was spending so much time in the ISR that it was blocking the UART interrupts, so the status messages weren't getting to the host and the commands weren't getting read by the micro.

Two: he was doing the PID loop math using floating-point arithmetic. On an 8-bit micro. Did he realize how expensive a floating-point multiply is on such a machine? Obviously not. Did he realize that the floating-point operations, as handled by the compiler (the Avocet C compiler), are not re-entrant, so you can't use them in an ISR?

So, yeah, do as little as you can in the ISR.
 
squarewave said:
Why use an ISR at all?
Because there's a generation of system architects that need to design for battery-powered operation, and polling in a micro is the polar opposite of that goal. :)

In general, the approach Andy outlines works Best (TM), in that a) the micro can suspend itself until the next interrupt occurs, meaning that most of the time is spent in low-power states, and b) it's quite a bit easier to debug traces, because the micro hasn't filled its caches with a gazillion compare/branch statements from reading the data registers of the ADC.

My favorite trick to make debugging easier is to use a spare pin as an 'I'm in ISR processing' marker: set the pin high at the start of the ISR (generally only a few cycles) and clear it at the end (only a few more). You can then look on your scope and see if any of the ADC interrupts are being lost (a quick trigger filter in the scope, plus just running the system until the filter matches), and it also tells you the total time spent in your ISR (and whether your ISR is getting stuck).
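
On an AVR the marker costs almost nothing (a sketch; the pin choice is arbitrary, with PB0 set as an output at init):
Code:
#include <avr/io.h>
#include <avr/interrupt.h>

ISR(ADC_vect)
{
    PORTB |= (1 << PB0);     // entering the ISR: marker pin high (SBI, 2 cycles)
    // ... service the converter ...
    PORTB &= ~(1 << PB0);    // leaving: marker pin low (CBI, 2 cycles)
}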
 
Matador said:
Because there's a generation of system architects that need to design for battery-powered operation, and polling in a micro is the polar opposite of that goal. :)
But in this case you want the thing to just run at full tilt. An Arduino is going to top out at about 14k samples per second, so I don't think you want to do any idling. Just call
Code:
while (!(ADCSRA & (1 << ADIF)));
in your loop, read the register, do your math on the sample, and if it's time to update the LED array, do it. Then loop around and do it again. No ISR necessary. In fact, I'd venture to guess it would be slightly slower.
 
bruno2000 said:
Trying to keep up. 
?????
Thanks!
Best,
Bruno2000
  Trying to wake up after 2 hr of sleep (before going to work); good thing the PLC did what I wanted quite fast, with only a few tweaks to the code I brought with me.

  My first approach was to just write the value to a global variable and set a flag inside the ISR, but as I said, the serial communication was taking too much time. I'll try to avoid that. I'm having trouble doing the processing; I wish I had fixed-point arithmetic so I could do the filter in a single operation without losing precision and without worrying about overflows. Floating-point libraries are not great here.
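
For what it's worth, the usual integer workaround looks like this (a sketch; the Q format is invented for a 10-bit sample): keep extra fraction bits in a 32-bit accumulator so the filter coefficient loses nothing to truncation, and the small input guarantees no overflow.
Code:
#include <stdint.h>

static int32_t avg;                        // Q10.12 accumulator

uint16_t average(uint16_t sampleAbs)       // rectified 10-bit sample
{
    int32_t x = (int32_t)sampleAbs << 12;  // promote to Q10.12
    avg += (x - avg) >> 11;                // avg += (x - avg)/2048, one step
    return (uint16_t)(avg >> 12);          // back to integer counts
}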

  Free-running the ADC is a bit easier than managing the flags and reading at safe moments. I guess I could figure something out and do it all in the main loop; not that hard either: read the conversion-ready flag, read the result, start the new conversion.

JS

 
Matador said:
Because there's a generation of system architects that need to design for battery-powered operation, and polling in a micro is the polar opposite of that goal. :)

In general, the approach Andy outlines works Best (TM), in that a) the micro can suspend itself until the next interrupt occurs, meaning that most of the time is spent in low-power states, and b) it's quite a bit easier to debug traces, because the micro hasn't filled its caches with a gazillion compare/branch statements from reading the data registers of the ADC.

My favorite trick to make debugging easier is to use a spare pin as an 'I'm in ISR processing' marker: set the pin high at the start of the ISR (generally only a few cycles) and clear it at the end (only a few more). You can then look on your scope and see if any of the ADC interrupts are being lost (a quick trigger filter in the scope, plus just running the system until the filter matches), and it also tells you the total time spent in your ISR (and whether your ISR is getting stuck).
I guess I didn't get that memo, but I am self-taught at embedded processor design, so I make it up as I go..

I sell a battery-powered product and battery life is always a concern, so for me off means off....  It takes a little extra discrete glue to hold up the PS long enough for the micro to boot, but zero battery drain when off is a good thing (IMO).

JR
 
  New code: better averaging, an HPF to remove DC errors, no overflow, no rounding errors. With 9 LEDs in 6 dB steps the last one turns off with the input shorted (not with the input merely connected; all too messy, too much noise picked up). I have a 10k resistor in series and an 11k load in the offset network (two 22k resistors), so about 6 dB of input attenuation. This means all the dynamic range of the converter is being used effectively, and that looks like a usable dynamic range to me. I haven't tested for precision, as the noise in the thing wouldn't let me. For more dynamic range, run an FFT or add an external PGA.

  Everything is done outside the ISR using squarewave's method. I still need to define the ISR so the converter keeps running in free mode (or the thing refuses to work), but at least it's empty.
 
  For the distracted: I'm using the A1 input now, as that's what got connected, just because; easier to change one bit than one jumper, I guess. To get 14 LEDs on digital pins 0 to 13 the code seems to be ready: just change the for loops, shift the //NOW LIGHTS loop one place, and change the last two digital writes to 13.

  For extra steps the analog inputs should be used as digital outputs; I'd probably assign those to the highest levels so they don't switch at the lower range, preventing noise in the ADC when the signal is low. Also, the L[14] vector should be updated with the required values.

Enjoy the code:
unsigned long timeLEDs=0UL, delayLEDs=100UL, sampleAbs=0UL, VU=0UL, VUAux=0UL, maxPeak=0UL, maxPeakAux=0UL;
long int sample=0, DC=1050624L;            // DC offset accumulator: 513 * 2^11
unsigned int VUo=0U, maxPeako=0U;
//1.5dB steps:
//unsigned int L[14]={6910U,8192U,9759U,11585U,13785U,16384U,19565U,23170U,27636U,32768U,39037U,46341U,55142U,65535U};
//3dB steps:
unsigned int L[14]={723U,1024U,1447U,2048U,2893U,4096U,5793U,8192U,11585U,16384U,23170U,32768U,46341U,65535U};
//6dB steps:
//unsigned int L[14]={16U,32U,64U,128U,256U,512U,1024U,2048U,4096U,8192U,16384U,32768U,65535U};
int i=0;

ISR(ADC_vect){
  // Intentionally empty: with ADIE set a defined vector is still needed
  // (the default handler resets the chip), and servicing it clears ADIF
  // so the free-running converter keeps retriggering.
}

void setup(){
  ADMUX=  0b01000001;   // AVcc reference, input A1
  ADCSRB= 0b01000000;   // free-running trigger source (ADTS = 000)
  ADCSRA= 0b11101111;   // enable, start, auto-trigger, interrupt on, /128 clock
  for(i=4; i<13; i++){
    pinMode(i,OUTPUT);  // LEDs on digital pins 4..12
  }
}

void loop(){
  while (!(ADCSRA & (1 << ADIF)));          // wait for a conversion
                                            // MATH HERE
  sample=ADC-(DC>>11);                      // ADC macro reads ADCL before ADCH
  DC+=sample;                               // leaky DC tracker: acts as an HPF
  sampleAbs=abs(sample);                    // rectify
  sampleAbs=sampleAbs<<12;                  // add fraction bits for the filters
  VUAux=(VUAux*2047+sampleAbs)>>11;         // average, coefficient 2047/2048
  maxPeakAux=(maxPeakAux*2047)>>11;         // peak decay
  maxPeakAux=max(sampleAbs,maxPeakAux);     // peak capture
                                            // NOW LIGHTS
  if(millis()>timeLEDs){
    timeLEDs=millis()+delayLEDs;
    maxPeak=maxPeakAux>>4;
    VU=VUAux>>4;
    for(i=4; i<12; i++){                    // bar for VU, floating dot for peak
      if(VU>L[i]||(maxPeak>L[i]&&maxPeak<L[i+1])){
            digitalWrite(i,HIGH);}
      else{ digitalWrite(i,LOW);}
    }
    if(VU>L[12]||maxPeak>L[12]){            // top LED: no upper bound
            digitalWrite(12,HIGH);}
    else{  digitalWrite(12,LOW);}
  }
}

JS
 
Matador said:
Because there's a generation of system architects that need to design for battery-powered operation, and polling in a micro is the polar opposite of that goal. :)

In general, the approach Andy outlines works Best (TM), in that a) the micro can suspend itself until the next interrupt occurs, meaning that most of the time is spent in low-power states, and b) it's quite a bit easier to debug traces, because the micro hasn't filled its caches with a gazillion compare/branch statements from reading the data registers of the ADC.

In a micro like the SiLabs 8051 variants, where the conversion-done flag is in a bit-addressable register, I think there's no real benefit to using the interrupt.

Consider: the 8051 has a BDATA space, which lets you declare a variable as type bit, and there are operations that work on bits (instead of bytes). These are all one-cycle operations. In the main loop, then, you can test a bit flag set in the ISR, or you can test the convert-done bit directly, and the ISR's overhead makes the former slightly slower. (Remember, in either case, to clear the convert-done bit.)
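
In Keil C51 terms the polled version is only a couple of instructions (a sketch; the part header and flag name are assumptions, check the datasheet for the actual device):
Code:
#include <C8051F300.h>        /* assumed part header: declares ADC0CN, AD0INT */

void main(void)
{
    /* ... converter setup ... */
    while (1) {
        while (!AD0INT);      /* bit test: a single JNB instruction */
        AD0INT = 0;           /* clear it: a single CLR instruction */
        /* ... fetch the result register, do the math ... */
    }
}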

Now, do that on a processor without the bit-addressable data space. If the converter status register holds multiple bits and you have to mask off the ones you don't care about when polling, the test becomes more complicated. OK, a couple more clock cycles.

I still prefer to use an ISR to capture the data and set the flag, and have the main loop check the flag and process if true. Why? Because the ISR's job is to fetch the last conversion result (and perhaps filter it, as JD suggests). When the main loop gets around to testing the flag, the conversion result doesn't need to be fetched; it is just sitting there, waiting.

This gets more interesting when you start adding other peripherals to the mix.

You're using the UART? Let the ISR handle receiving new bytes and shoving them into a FIFO. When the ISR detects an end-of-line character, it sets a flag. The main loop just looks for the flag, and doesn't have to concern itself with reading every byte until a packet is ready.
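
That pattern, AVR-flavored (a sketch; buffer size and names illustrative):
Code:
#include <avr/io.h>
#include <avr/interrupt.h>
#include <stdint.h>

#define RXMASK 0x3Fu                     /* 64-byte power-of-two FIFO */
volatile uint8_t rxBuf[RXMASK + 1];
volatile uint8_t rxHead = 0;
volatile uint8_t lineReady = 0;          /* set at end-of-line */

ISR(USART_RX_vect)
{
    uint8_t c = UDR0;                    /* reading UDR0 clears the request */
    rxBuf[rxHead & RXMASK] = c;
    rxHead++;
    if (c == '\n')
        lineReady = 1;                   /* main loop parses the whole packet */
}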

You're sending a bunch of register values to some peripheral chip over I2C? Set a data-to-send pointer to the start of the block of memory holding those register values, set a count of bytes to transmit, fire off the I2C, and let the ISR deal with the transmission; wake me when you're done.

My favorite trick to make debugging easier is to use a spare pin as an 'I'm in ISR processing' marker: set the pin high at the start of the ISR (generally only a few cycles) and clear it at the end (only a few more). You can then look on your scope and see if any of the ADC interrupts are being lost (a quick trigger filter in the scope, plus just running the system until the filter matches), and it also tells you the total time spent in your ISR (and whether your ISR is getting stuck).

This trick is gold, and it is useful even if you have on-chip debug, because we all know how hard it is to debug a real-time system. On the 8051, the SETB and CLR instructions are one cycle each, so there's no real impact on what you're doing.
 
squarewave said:
Why use an ISR at all?

[Edit: Andy kinda just said this, in a slightly different way  :) ]

For embedded design, learning to write from the perspective of a state machine prevents a lot of hunting and debugging as the solution becomes more complex.

If you have an A/D and a pushbutton, using an ISR to quickly handle the interrupts that occur (I typically just set a flag saying something has happened), plus a task to handle servicing of the event, keeps the code clean and allows integration with other code with few bumps in the road.

The features of the MCU are there to help you; ignore them as you will =)
 
What is the overall block diagram of the algorithm?  Is it essentially:

1) Rectify the incoming signal (e.g. make all negative samples positive)
2) Pick an averaging window - say 100ms
3) Accumulate and average 100ms of samples
4) Convert the average voltage to a quantity of LEDs to light
5) Light the LEDs
6) Goto 1
 
Matador said:
What is the overall block diagram of the algorithm?  Is it essentially:
I don't know about block diagrams but I will share what I did...
1) Rectify the incoming signal (e.g. make all negative samples positive)
This forfeits one bit of dynamic range, but yes, I rectified inside the micro (making my 12 bits only 11 bits). There is no + and - inside a micro running from a single supply, so I first grabbed an AC zero, then subtracted that from each sample to rectify. That moving AC zero acts kind of like an HP filter, removing the DC and very slowly changing signals, like settling.
2) Pick an averaging window - say 100ms
not sure what exactly that is... I used code to mimic an RC attack time and a different RC release time.  The beauty of doing this inside the digital domain is you don't have to use steering diodes for the time constants.

I keep a running average of the signal and compare each sample to that running average. If higher, the difference gets multiplied by a factor that represents the attack time constant and added in; if smaller, the difference gets multiplied by the release factor and subtracted (see the sketch after this post).
3) Accumulate and average 100ms of samples
constantly sampling.... I mentioned capturing a string of samples and averaging to deal with S/H settling time, but a single input does not need to worry about that.
4) Convert the average voltage to a quantity of LEDs to light
can be a pre-crunched look-up table.  Even RMS can offload real-time processor clock time by using a precalculated RMS look-up table.
5) Light the LEDs
in my design I was pushing tens of LEDs, so I used serial comms for this, not on every sample.

Yes, there was a regular interval for updating the display and sundry crunching.
6) Goto 1
That sounds pretty easy... Just do it....

JR
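
The attack/release averaging JR describes, as a C sketch (the shift coefficients are invented for illustration; his actual factors weren't given):
Code:
#include <stdint.h>

static int32_t env;                  // running average of the signal

int32_t envelope(int32_t sampleAbs)  // rectified input sample
{
    int32_t diff = sampleAbs - env;
    if (diff > 0)
        env += diff >> 4;            // attack: add in 1/16 of the difference
    else
        env += diff >> 10;           // release: 1/1024, much slower
    return env;
}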
 