Saturday, March 17, 2012

Astronomy Cast: Precision

So we deal with two basic variables. One is: how precise is your measurement? That basically says, if I take a measurement and I repeat it over and over and over, the values are either tightly bundled together or, if it's not a precise result, they're spread out a lot. The example we use when we're teaching is throwing darts. If you're a highly precise dart thrower, all your darts are going to land within half an inch of each other. If you're an imprecise dart thrower, your darts are going to land spread out over two, three meters, maybe, on the wall; you're taking up the whole wall with your five darts. So precision is how closely spaced all of your results are. (Both of these ideas are sketched in code at the end of this post.)

If you're doing some kind of scientific research and you're looking for some expected outcome, you're going to want that outcome to be precise; otherwise, it's just going to be random noise. At a certain level, we always start off with a fair amount of noise in our results. As people have worked on trying to pin down the expansion rate of the Universe, they have gone from plus or minus a few hundred km/s in the early years of trying to make these measurements to plus or minus a few km/s. So over time, we get our results closer and closer; the error bars on the age of the Universe have gotten smaller and smaller. You always start off with less precise results and get better and more refined as you go.

But precision and accuracy aren't the same thing, and this is one of the things we have to worry about when we start looking at things like the "neutrinos moving faster than the speed of light" problem. We have to worry about two different things. One is systematic offsets in our measuring, which is where we worry about the neutrinos. But sometimes we're just missing a term in our equations, and that doesn't so much go into precision and accuracy; we just missed a term in our theory, so that's a different bin.

Any time we make a measurement, there's going to be some sort of inherent noise in it. If we're trying to detect light, there's just this constant, steady stream of photons at all colors creating a noisy background. There are going to be minor fluctuations when you try to make measurements with a ruler, with a laser, with any tool. And if the noise is truly random, all of these different variables, all of the different ways things can go wrong, work out to form what's called a Gaussian distribution: a bell curve, a normal distribution, such that the majority of the measurements are going to be pretty close to the same value. This means that if you take all of your points and plot them, 34.1% of your values will fall within one sigma above the mean and another 34.1% within one sigma below it. So you end up with a curve where plus or minus one sigma contains about 68.2% of your values.
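That 34.1% figure is easy to check numerically. Here's a minimal sketch (my addition, not from the episode) that draws a large Gaussian sample with NumPy and counts how many points land within one sigma on each side of the mean; the sample size and random seed are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(0)
samples = rng.normal(loc=0.0, scale=1.0, size=1_000_000)

mu, sigma = samples.mean(), samples.std()

# Fraction within one sigma above the mean (should be close to 34.1%)...
above = np.mean((samples > mu) & (samples <= mu + sigma))
# ...and within one sigma below it (also close to 34.1%).
below = np.mean((samples >= mu - sigma) & (samples < mu))

print(f"within +1 sigma: {above:.1%}")               # ~34.1%
print(f"within -1 sigma: {below:.1%}")               # ~34.1%
print(f"total within 1 sigma: {above + below:.1%}")  # ~68.3%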
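And here's the dart-throw picture of precision versus accuracy in the same style: another illustrative sketch, with the bullseye position, offset, spreads, and sample sizes all made-up numbers. A tight spread sitting a long way from the bullseye is exactly the "precise but inaccurate" result that a systematic offset, like the one suspected in the neutrino measurement, would produce.

```python
import numpy as np

rng = np.random.default_rng(42)
bullseye = 0.0  # the "true" value we are trying to hit

# Precise but inaccurate: tightly bundled throws, systematically offset.
precise_biased = rng.normal(loc=2.0, scale=0.1, size=1000)

# Accurate but imprecise: centered on the truth, widely scattered.
accurate_noisy = rng.normal(loc=0.0, scale=2.0, size=1000)

for name, throws in [("precise but biased", precise_biased),
                     ("accurate but noisy", accurate_noisy)]:
    # Accuracy: how far the average throw lands from the bullseye.
    # Precision: how tightly the individual throws bundle (std. dev.).
    print(f"{name}: mean offset = {throws.mean() - bullseye:+.3f}, "
          f"spread (sigma) = {throws.std():.3f}")
```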
