I’ll often hear my customers say something along the lines
of “I really wish we could get a more accurate consistency transmitter”. When I dig a little deeper, I usually find
that the problem isn’t so much the transmitter’s accuracy as
the way that the calibration for that transmitter is
built and managed.
What do I mean?
When somebody installs a consistency transmitter, what they usually
want is for that transmitter to generate a consistency number that agrees with
whatever their lab says. If the lab analyzes
a stock sample and says it’s 3.4%, they want the transmitter to say 3.4%, too.
Of course, the reality is that the transmitter will never actually read 3.4%. It will
always be different. Always.
What most people don’t fully understand is that this behavior
is absolutely normal. The question isn’t
if the transmitter and the lab disagree, because they will. The real question is how much the lab and
transmitter can disagree before you start to think that you’ve really got a problem.
And that’s a question that far too few people ask. The consequences of not asking that question can
be severe. Take this scenario, for
example.
The operators get a consistency reading, X, from the
lab. The value of X doesn’t agree with
what the transmitter is saying, which is reading Y.
The operators call the E&I shop and tell them that they need to go
and service the transmitter so that it reads X instead of Y.
In a couple of hours, or maybe the next day, an E&I tech
goes to the transmitter in question and adjusts the output so that it now reads
X. When the operators get the next lab
analysis, they note that while the transmitter is now reading X, the lab now
says it's actually Z. So, again, they
call up the E&I shop and tell them that the transmitter has drifted again, and now
they need to make it read Z instead of X.
The E&I shop dutifully dispatches another tech, probably
not the same one as before, who goes to the transmitter and readjusts it to now
read Z. It’s a good bet, by the way, that the first
and second tech don’t use the same method to adjust the transmitter. The next lab sample evaluation comes, and
now, it’s W, instead of X, Y, or Z. By
this time, everybody starts calling the transmitter “that damn transmitter” and walks around cussing out the vendor for his
piece of s**t instrument.
Does that sound familiar?
This is a situation that can be avoided if one knows the
statistics behind a calibration. When
you build a calibration, there is something that you have to accept as if it were
the Gospel truth: the instrument is
better than your lab.
What I mean by that is that your instrument will likely
respond pretty much the same way given the same process conditions with very
little variation. Most undamaged and
properly functioning instrumentation will respond with less than 1% variation.
The same can’t be said of the lab.
This doesn’t mean that your lab is bad, or that your lab
techs are lazy slobs. It’s just the
nature of the beast when it comes to manual analyses.
The TAPPI T-240 method for establishing consistency, for example,
reports a repeatability standard of only 10%, and that presumes that your
sampling technique isn’t adding a whole lot more error.
In real terms, that means that if the
first lab analysis said 3.0%, then 95% of the time you would expect a second
analysis of the same stock to deviate by no more than 0.3%, or in other words,
to come in somewhere between 2.7% and 3.3%.
That’s a pretty big range.
And you get that if you’re doing it properly.
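That arithmetic is simple enough to sketch in a few lines of code. (This is just an illustration of the math above; the function name and the 10% default are mine, not anything from a standard.)

```python
def repeatability_band(lab_value, repeatability=0.10):
    """95% repeatability band around a lab consistency reading.

    `repeatability` is the relative repeatability figure discussed
    above (10%, per the TAPPI T-240 repeatability statement).
    """
    half_width = lab_value * repeatability
    return lab_value - half_width, lab_value + half_width

low, high = repeatability_band(3.0)
print(f"{low:.2f} to {high:.2f}")  # prints 2.70 to 3.30
```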
Now, what does this mean, exactly?
It doesn’t mean that your consistency readings are
worthless. Quite the contrary, it means
that you now have a benchmark by which you can gauge the health of your
consistency readings.
Remember that the instrument is probably responding to real
changes in consistency with a repeatability of better than 1%. If your calibration was built properly in the
first place – a topic for another column, by the way – then if the lab
typically reports X when the meter is reading Y, you shouldn’t get excited
until the lab value exceeds X +/- 10% of X.
Remember, 95% of the time, you expect that lab to get within
that 10% of X. If it is within that 10%,
then, everything is working OK and you shouldn’t mess with your transmitter or
the calibration.
I wouldn’t even get excited if you occasionally get a
reading in excess of 10% of X. Remember, 95%
of the time you expect it to be less, but that also means that 5% of the time,
or about once every twenty times, it could be more than 10% and everything would
still be OK.
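The rule of thumb here boils down to a single comparison. Here’s one way to sketch it (the helper name is hypothetical, and the 10% figure is the repeatability discussed above):

```python
def within_tolerance(lab_value, expected_lab_value, repeatability=0.10):
    """True if a lab reading falls inside the 95% repeatability band
    around the value the lab typically reports for this meter reading.
    No cause for alarm while this holds -- and remember that roughly
    one reading in twenty can land outside the band and still be OK."""
    return abs(lab_value - expected_lab_value) <= repeatability * expected_lab_value

# If the lab typically reports 3.0% for this meter reading:
print(within_tolerance(3.2, 3.0))  # True  -- inside the 2.7%-3.3% band
print(within_tolerance(3.5, 3.0))  # False -- outside the band
```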
So when should you get excited?
If your readings are consistently deviating more than that
10% of X, then it’s time to review the situation.
If your variation is always positive, or always negative, then it’s time to review the situation. A valid calibration should see both positive and negative variation.
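The one-sided-variation check can be sketched in code, too. This is just an illustration of the idea above; the sample-count threshold is an assumption of mine, not a standard figure.

```python
def biased(deviations, min_samples=8):
    """Flag a one-sided calibration: if every recent lab-minus-meter
    deviation has the same sign, the calibration is probably shifted,
    even when each individual deviation stays inside the 10% band.
    (`min_samples` is an illustrative threshold, not a standard.)"""
    if len(deviations) < min_samples:
        return False  # not enough evidence yet
    return all(d > 0 for d in deviations) or all(d < 0 for d in deviations)

print(biased([0.1, 0.2, 0.05, 0.15, 0.1, 0.08, 0.12, 0.09]))   # True
print(biased([0.1, -0.2, 0.05, -0.15, 0.1, 0.08, -0.12, 0.09]))  # False
```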
I’ll stop here for now, but I’ll leave you with one more
thought. You can do a lot better than that
10%. More on that in a future column.