Systematic error, as the name implies, is a consistent or recurring error caused by faulty experimental equipment or by incorrect use of equipment. With systematic error, you can expect each measurement to differ from the true value in the same way.
This is also known as systematic bias, because the errors obscure the correct result and can lead the researcher to wrong conclusions.
In the following paragraphs, we are going to explore the types of systematic error, their causes, how to identify systematic error, and how you can avoid it in your research.
There are two types of systematic error: offset error and scale factor error. Each has its own distinct attributes, as we will see below.
Before starting your experiment, your scale should be set to zero. Offset error occurs when the measurement scale is not zeroed before you weigh your items.
For example, if you’re familiar with a kitchen scale, you’ll notice it has a button labeled tare. The tare button resets the scale to zero before you weigh your item. If the tare button is not used correctly, all measurements will carry an offset error, since the values on the measurement scale will not start from zero.
Another name for offset error is the zero-setting error.
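Offset error can be pictured as a constant shift added to every reading. Here is a minimal sketch in Python; the 0.3 kg offset is a made-up value for illustration:

```python
def read_with_offset(true_weight_kg, offset_kg=0.3):
    """Simulate a scale that was never tared: every reading is
    shifted by the same constant amount."""
    return true_weight_kg + offset_kg

# The scale is wrong by exactly the same amount at every weight.
print(read_with_offset(0.0))   # 0.3 -> the empty scale does not read zero
print(read_with_offset(10.0))  # 10.3
print(read_with_offset(50.0))  # 50.3
```

Notice that the size of the error does not depend on what is being weighed; that constancy is what distinguishes offset error from scale factor error.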
Read: Research Bias: Definition, Types + Examples
Scale factor error is also known as a multiplier (or multiplicative) error.
Scale factor error occurs when your instrument consistently reads values that are proportionally larger or smaller than the actual values.
Let us consider a scenario whereby your scale repeatedly adds an extra 5% to your measurements. So when you’re measuring a value of 10kg, your scale shows a result of 10.5kg.
The implication is that the error grows with the measurement: every reading is off by the same proportion. If the scale reads 1% high, every reading will be 1% too high. In other words, a scale factor error adds to or subtracts from the true value by a fixed percentage or proportion.
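The proportional behavior can be sketched the same way; the 5% factor below mirrors the scenario described above:

```python
def read_with_scale_error(true_weight_kg, factor_error=0.05):
    """Simulate a scale with a 5% multiplier error: every reading
    is inflated in proportion to the true value."""
    return true_weight_kg * (1 + factor_error)

# The absolute error grows with the measurement,
# but the percentage error stays the same.
print(read_with_scale_error(10.0))   # 10.5 -> off by 0.5 kg
print(read_with_scale_error(100.0))  # 105.0 -> off by 5 kg
```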
One thing to note is that systematic error is always consistent. If a measurement gives a value of, say, 70g the first reading, and you conduct the measurement again, it will give you the same reading as before.
Also, both offset error and scale factor error can be plotted on a graph to show how they differ from each other.
Look at the graphs below: the black line represents your measured data being equal to the true value, and the blue line represents an offset error.
We established earlier that offset error shifts every data value up or down by a constant amount. If you observe the blue line, you’ll see that it adds one extra unit to every data point.
The second graph, with a pink line, represents scale factor error. Scale factor error shifts your data values proportionally, in the same direction.
Here, all values are shifted in the same direction, but by different amounts, each in proportion to its size.
Read Also – Type I vs Type II Errors: Causes, Examples & Prevention
The two primary causes of systematic error are faulty instruments or equipment and improper use of instruments.
Systematic error can also creep into your experiments in other ways: through the research data itself, confounding variables, the procedure you used to gather your data, and even your analysis method.
Briefly, we will discuss the two primary causes of systematic error and also look at one other cause which is known as the analysis method.
A researcher who is poorly informed, careless, or affected by a physical limitation that interferes with the study can alter the outcome of the research. Guarding against these as a researcher can immensely reduce the likelihood of making errors in your research.
Systematic errors can happen if your equipment is faulty. The imperfection of your experiment equipment can alter your study and ultimately, its findings.
As a researcher, if you do not plan in advance how you will control your experiment, your research is at risk of being inaccurate. To reduce the risk of error in your research, try as much as possible to limit your independent variables to one. The fewer variables in an analysis, the better your chances of error-free research.
Generally, a systematic error can occur if you as a researcher repeatedly take measurements the wrong way, if your measuring tape has stretched, perhaps because of many years of use, or if your scale does not read zero before anything is placed on it.
Read: Survey Scale: Definitions, Types + [Question Examples]
The effect of a systematic error in research is that it moves your measurements away from their true value by the same amount or the same proportion, and in the same direction.
The consequence is that this shift does not affect your reliability, because no matter how many times you repeat the measurement, you will get the same value. It does, however, affect the accuracy of your result. If you are not careful enough to notice the inaccuracy, you might draw the wrong conclusion or even apply the wrong solutions.
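This reliability-versus-accuracy distinction can be shown with a small simulation; the true value and bias below are made-up numbers for illustration:

```python
TRUE_VALUE = 80.0  # grams; a hypothetical true weight
BIAS = 15.0        # a hypothetical constant systematic error

# "Measure" the same item five times with the biased instrument.
readings = [TRUE_VALUE + BIAS for _ in range(5)]

print(readings)            # [95.0, 95.0, 95.0, 95.0, 95.0]
print(len(set(readings)))  # 1 -> perfectly reliable (repeatable)...
print(readings[0] - TRUE_VALUE)  # 15.0 -> ...but consistently inaccurate
```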
You cannot easily detect a systematic error in your study. In fact, you may not recognize systematic errors even with visualization methods.
You need to make use of statistical analysis to identify the type of error present in your research and assess its size. If your findings agree with the expected or reference values, systematic error is unlikely.
You can also identify the systematic error by comparing the result from your analysis to the standard. If the two results differ, then there may be systematic bias.
You can use standard data or known theoretical results as a reference to detect and determine the systematic errors in your research.
Usually, when you analyze your data, you may expect the errors in your results to be randomly distributed. A consistent increase or decrease in your results, however, will tell you that a systematic bias exists in your data.
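As a sketch of this comparison, you can check whether the differences between your readings and reference standards are consistently one-sided; the reference and measured values below are invented for illustration:

```python
# Known reference values and the readings your instrument gave.
reference = [10.0, 20.0, 30.0, 40.0]
measured  = [10.4, 20.5, 30.4, 40.5]

differences = [round(m - r, 2) for m, r in zip(measured, reference)]
mean_diff = round(sum(differences) / len(differences), 2)

print(differences)  # [0.4, 0.5, 0.4, 0.5] -> all shifted the same way
print(mean_diff)    # 0.45

# Random error would scatter above and below zero; a one-sided
# shift like this points to a systematic bias.
if all(d > 0 for d in differences) or all(d < 0 for d in differences):
    print("consistent one-sided shift: possible systematic bias")
```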
Once you identify a systematic error in your research, correct for it immediately, for example by recalibrating the affected equipment.
Read: Survey Errors To Avoid: Types, Sources, Examples, Mitigation
We are going to look at the following examples to better understand the concept of systematic error.
Example one:
Let’s assume some researchers are carrying out a study on weight loss. At the end of the research, they realize the scale added 15 pounds to each reading in the sample data, and they conclude that their findings are inaccurate because the scale gave wrong readings. This is an example of a systematic error: the error is consistent, but it makes every reading inaccurate. If the researchers had not noticed the discrepancy, they would have drawn a wrong conclusion.
This example shows how systematic errors can occur in research because of faulty instruments. Therefore, frequent calibration is advised before conducting a test.
Example two:
When measuring the temperature of a room, if your thermometer is in poor contact with the room’s air, you will get an inaccurate reading. If you repeat the test and the thermometer still has poor thermal contact, you will get consistent, though still inaccurate, results. Here the thermometer is not faulty; the cause of the error is the researcher’s improper handling.
These two examples show that systematic error can arise from faulty instruments and wrong usage of the instrument. If you do not get a result close to the true value of your data, consider identifying the cause of the error and how to reduce it.
Once you can identify the cause of a systematic error, you should be able to reduce its effect on your data to a great extent.
The issue, however, is that systematic errors are not easily detectable. Your equipment cannot talk, so you won’t get a warning signal, and regardless of how many times you conduct the test, you will arrive at the same result, which can be misleading.
So how should you go about this? You should first make sure that you understand your equipment and its features.
This will allow you to know the limitations of your equipment. If you are using a voltmeter on a circuit, for example, it may give different voltage readings depending on circuit conditions, such as whether the current is high or low.
If you’re conducting a test with a computer program, confirm in advance that the program works accurately. You can do this by running it on data whose values have been determined previously. That way you know what the outcome should be, so if you get a different result, you know something is wrong.
Once you understand where the issue is, you can reduce systematic error by properly setting up your equipment. Test your equipment before conducting the actual reading and always compare the value from your reading against the standards or theoretical result.
As the name suggests, random error is always random. You cannot predict it and you cannot get the same reading if you repeat the measurement or analysis. You will always get a unique value (random value).
For example, let us assume you weighed a bag of grain on a scale. The first time, you might get a value of 140 lbs; if you tried again, you might arrive at 125 lbs. Regardless of the number of times you repeat the measurement, you will get a different, random value.
Systematic errors, by contrast, always produce the same error. Even if you repeat the process, you will still arrive at the same error.
For example, if your measuring tape has stretched by 99mm, every reading will be shifted by that amount, whether as an addition or a subtraction. If you repeat a measurement that reads 80g, you will get the same reading as before. The error will be consistent.
Systematic errors can be eliminated by using one of these methods in your research.
Triangulation: This is the method of using more than one technique to record your research observations, so that you don’t rely on a single piece of equipment or a single technique. When you’re done with your testing, you can easily compare the findings from your multiple techniques and see whether or not they match.
Frequent calibration: This means that you compare the findings from your test to the standard value or theoretical result. Doing this regularly with a standard result to cross-check can reduce the chance of systematic error in your research.
When you’re conducting research, make sure you do routine checks. If you’re wondering how often you should perform calibration, note that this generally depends on your equipment.
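A simple form of calibration is a one-point check against a known standard; the sketch below uses hypothetical function names and invented values:

```python
def calibrate_offset(reading_of_standard, standard_value):
    """One-point calibration: measure a known standard and derive
    the constant offset the instrument is adding."""
    return reading_of_standard - standard_value

def corrected(reading, offset):
    """Remove the calibrated offset from subsequent readings."""
    return reading - offset

# Suppose a certified 100.0 g reference weight reads as 115.0 g.
offset = calibrate_offset(115.0, 100.0)
print(offset)                   # 15.0 -> the instrument's constant bias
print(corrected(95.0, offset))  # 80.0 -> the corrected reading
```

Note that this only corrects a constant offset; a scale factor error would need a multi-point calibration instead.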
Randomization: Using randomization in your study can reduce the risk of systematic error, because when you’re testing, you can randomly assign your data sample across the relevant treatment groups. That evens out your samples across the groups.
Both systematic error and random error are not to be desired. However, systematic error is a more difficult problem to have in research. This is because it takes your findings away from the correct value and this can lead to false conclusions.
When you have a random error in your research, you know that the result of your measurements can either increase or decrease just a little from the real value. If you average these results, you are likely to get close to the actual value. But if your measurements have a systematic error, your findings will be rather far from the true value.
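A quick simulation shows why averaging helps with random error but not with systematic error; the noise level and bias below are made-up values:

```python
import random

random.seed(42)  # fixed seed so the illustration is reproducible
TRUE_VALUE = 100.0

# Random error: each reading scatters around the true value.
random_readings = [TRUE_VALUE + random.gauss(0, 2.0) for _ in range(1000)]
# Systematic error: every reading carries the same bias.
biased_readings = [TRUE_VALUE + 15.0 for _ in range(1000)]

mean_random = sum(random_readings) / len(random_readings)
mean_biased = sum(biased_readings) / len(biased_readings)

print(round(mean_random, 1))  # close to 100.0 -> averaging cancels random error
print(mean_biased)            # 115.0 -> averaging cannot remove the bias
```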