Posted on August 19, 2011

## I know why you are here! — I’ve been there!

Trying to find useful notes on how to accomplish anything is often a daunting task, mainly because the Internet suffers from a very low SNR and applications lately cater only to dummies.

While there are some very interesting documents out there, you still have to go through a lot of dirt to find a few gold pebbles!

### Let’s jump right into it.

I won’t bother you with the technical aspects of ADCs and their different implementations; instead, I’ll jump right into what you need to know in order to get started with them. You obviously understand what ADC stands for and you generally have a good idea of what they do.

First of all, the msp430g2131 has an 8-channel ADC port; the converter itself is a 10-bit SAR.

SAR stands for Successive Approximation Register. This type of ADC uses an internal DAC to sweep and approximate the input with a comparator, settling one bit per step (essentially a binary search), and for this reason a voltage reference is required.

### Calculating the actual input voltage from the resulting sample:

Let’s assume your voltage reference is 2.5 V, i.e. 2500 mV. Now, we have 10 bits of resolution, which gives 1024 steps (2^nBits, with nBits = 10).

With this information, if you were given a sample with a value of 511, you’d know the input voltage would be around 1.25 V.

Thus, we can formulate a very simple approach to obtain a meaningful value out of the given sample:

`input_millivolts = ( adc_sampled_value * reference_millivolts ) / ( 2^adc_bits )`
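As a minimal sketch in C (the function name is mine; note that integer division truncates, so results round down):

```c
#include <stdint.h>

/* Convert a raw 10-bit ADC sample to millivolts.
 * sample: 0..1023, ref_mv: reference voltage in millivolts. */
static uint16_t adc_to_millivolts(uint16_t sample, uint16_t ref_mv)
{
    /* Widen to 32 bits before multiplying so e.g. 1023 * 2500
     * does not overflow; >> 10 divides by 2^10 = 1024. */
    return (uint16_t)(((uint32_t)sample * ref_mv) >> 10);
}
```

With a 2500 mV reference, a sample of 511 yields 1247 mV (the exact value is ~1247.6 mV; truncation drops the fraction).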

If we — for some reason — wanted to perform the inverse operation, one way of doing it would be:

`adc_sampled_value = ( input_millivolts / reference_millivolts ) * ( 2^adc_bits )`
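And a sketch of the inverse (again, the name is mine):

```c
#include <stdint.h>

/* Predict the raw 10-bit sample a given input voltage should produce.
 * mv and ref_mv are both in millivolts. */
static uint16_t millivolts_to_adc(uint16_t mv, uint16_t ref_mv)
{
    /* << 10 multiplies by 2^10 = 1024; widen first to avoid overflow. */
    return (uint16_t)(((uint32_t)mv << 10) / ref_mv);
}
```

An input of 1250 mV against a 2500 mV reference maps to sample 512.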

Of course, if we have a “compressed” range, we must expand it after conversion to obtain the actual voltage at the input of the circuit. You would simply multiply the input_millivolts variable by the ratio used in your input divider.
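A hedged sketch of that expansion step (for a two-resistor divider the ratio is (R_top + R_bottom) / R_bottom; names are mine):

```c
#include <stdint.h>

/* Recover the voltage at the circuit input from the millivolts measured
 * at the ADC pin, given the input divider ratio (1 = no divider). */
static uint32_t undo_divider(uint16_t measured_mv, uint16_t ratio)
{
    return (uint32_t)measured_mv * ratio;
}
```

With a 4:1 divider (say 30 kΩ over 10 kΩ), 2497 mV at the pin means roughly 9988 mV at the circuit input.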

### Accuracy vs Resolution:

The accuracy of your conversion (how close the result is to the true value) is dictated by process variance, component tolerances and circuit stability, whereas the resolution is defined by the overall range of available values, that is, the number of meaningful bits.

### Using it…

So now you understand that the input voltage must stay within the reference voltage range. This means that we must perform input conditioning in order to obtain meaningful results.

For example, if you wanted to measure 0 to 10 volts, you’d be compressing a bigger voltage range into a smaller one, and that tells us one thing: we’ll lose resolution. There are ways to enhance our results (by over-sampling, for example), but that’s outside the scope of this guide.

Naturally, one way to “compress” this range would be to use the modest voltage divider: two resistors. This works just fine for low-complexity circuits and low sampling rates. In this particular example you would obtain about a quarter of the resolution you’d have if you were working with a 1:1 input / reference ratio.
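You can see that resolution loss by computing the size of one ADC count as seen from the circuit input (a sketch; names are mine):

```c
#include <stdint.h>

/* Size of one ADC count in microvolts, referred to the circuit input.
 * ref_mv: reference in millivolts; ratio: input divider ratio (1 = none). */
static uint32_t lsb_microvolts(uint16_t ref_mv, uint16_t ratio)
{
    return ((uint32_t)ref_mv * 1000UL / 1024UL) * ratio;
}
```

With a 2.5 V reference, one count is about 2.44 mV at a 1:1 ratio, but about 9.8 mV through a 4:1 divider: four times coarser, i.e. a quarter of the resolution.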

When you dive into higher sampling rates and resolutions, other factors suddenly come into play: the accuracy and stability of your reference voltage (you may not be able to use an internal reference anymore!), noise, and…! — It gets harder than the pope at the candy store.
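To tie this back to the chip mentioned earlier, here is a hedged single-conversion sketch for the ADC10 peripheral. The register and bit names come from TI’s msp430 headers; the choice of channel A1 and the internal 2.5 V reference are assumptions for illustration, not a drop-in implementation:

```c
#include <msp430.h>

int main(void)
{
    WDTCTL = WDTPW | WDTHOLD;               // stop the watchdog

    // Internal 2.5 V reference, 16-clock sample-and-hold, ADC on
    ADC10CTL0 = SREF_1 | ADC10SHT_2 | REF2_5V | REFON | ADC10ON;
    ADC10CTL1 = INCH_1;                      // input channel A1 (P1.1)
    ADC10AE0 |= BIT1;                        // enable analog input on P1.1

    __delay_cycles(1000);                    // crude wait for Vref to settle

    ADC10CTL0 |= ENC | ADC10SC;              // enable and start conversion
    while (ADC10CTL1 & ADC10BUSY)            // poll until the SAR finishes
        ;

    volatile unsigned int sample = ADC10MEM; // 10-bit result, 0..1023

    for (;;)
        ;                                    // nothing else to do here
}
```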

### Closing up…

While I mentioned a specific device (it merely pertains to TI’s LaunchPad), the same principles apply to any platform as long as a SAR ADC is being used. There are other topics involving ADCs, such as dithering, operation modes, and input conditioning (not only voltage scaling but also sample-and-hold timing, etc.).

However, for the most part, this short text should provide you with the necessary tools to obtain a raging epiphany should you be a beginner.

Cheers.

PS: Comprehension comes first. Make sure you understand the basics before leaning on anyone’s code.