Quality assurance of routine data (3/10)
3. What influences routine data quality?
a) General remarks
It is important to assess the coverage of the HMIS (in our case, for nutrition indicators). If coverage is low, nutrition data from the HMIS will probably be biased and therefore not useful.
Routine data quality issues are similar for most indicators. As shown in the figure below, the main issues are related to the accuracy and completeness of the numerator (1 and 2) as well as to the estimation method and the accuracy of the denominator (3).
Source: Countdown to 2030 for Women’s, Children’s and Adolescents’ Health - Presentation in analysis workshop of health facility data for key health system performance indicators, May 2019
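The effect of the denominator (issue 3 above) can be made concrete with a small sketch. The numbers and population figures below are hypothetical: the same reported numerator gives a noticeably different indicator value depending on which denominator estimate is used.

```python
# Illustrative sketch (hypothetical numbers): the same reported numerator
# yields different indicator values depending on how the denominator
# (e.g., the target population of children under five) is estimated.

def indicator(numerator: int, denominator: float) -> float:
    """Return the indicator as a percentage of the target population."""
    return 100.0 * numerator / denominator

# Reported number of children screened for malnutrition (numerator).
children_screened = 4_200

# Two plausible denominator estimates for the same district:
census_projection = 6_000    # projected forward from the last census
headcount_estimate = 5_000   # from a more recent local headcount

print(f"Coverage (census projection): {indicator(children_screened, census_projection):.1f}%")  # 70.0%
print(f"Coverage (local headcount):   {indicator(children_screened, headcount_estimate):.1f}%")  # 84.0%
```

A 14-percentage-point gap arises here purely from the choice of denominator, before any numerator accuracy or completeness problems are considered.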
Data quality is influenced both during and after data collection. According to MEASURE Evaluation, three basic factors affect the quality of program-level results when they are compared over time:
- Instrumentation – Instrumentation refers to the way in which data are collected. The methods used to collect and compile results in one reporting period may not be the same as those used in the next reporting period. Because of this “measurement bias”, the two sets of results may not be directly comparable (for example, when weighing scales are of different types or are not calibrated correctly).
- Programmatic – Results from one reporting period may appear inconsistent with the equivalent results from another because of real changes in program implementation or in the level of program activity (for instance, breastfeeding promotion sessions at the health center are expanded to include home visits in the next reporting period, or the facility is closed because the nurse is away for training or collecting drugs at the district pharmacy).
- Measurement – Changes in indicator definitions can mean that program-level results are measured in different ways across time periods, so results from one reporting period are not necessarily directly comparable with those from another. This can also be linked to the quality of training of the enumerators.
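The instrumentation factor can be sketched with hypothetical data: a weighing scale with a constant calibration offset shifts every measured weight, which changes how many children fall below an underweight threshold even though the children's true weights are unchanged. The threshold and weights below are invented for illustration only.

```python
# Illustrative sketch (hypothetical data): a miscalibrated scale shifts
# measured weights by a constant offset, changing the count of children
# classified as underweight although true weights are the same.

UNDERWEIGHT_KG = 10.0  # hypothetical threshold for this age group

true_weights_kg = [9.4, 9.8, 10.1, 10.5, 11.0, 9.9, 10.2, 10.8]

def count_underweight(weights, offset_kg=0.0):
    """Count children whose *measured* weight (true + offset) is below the threshold."""
    return sum(1 for w in weights if w + offset_kg < UNDERWEIGHT_KG)

print(count_underweight(true_weights_kg))        # correctly calibrated scale -> 3
print(count_underweight(true_weights_kg, -0.3))  # scale reads 0.3 kg low     -> 5
```

If the scale is replaced or recalibrated between reporting periods, the underweight count can change even when the nutritional status of the population has not, which is exactly the comparability problem described above.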