Evaluation is the final phase of the statistical cycle and a key component of the statistical collection activity. It is a measure of how well the different activities of the collection process met the objectives of the statistical collection. Evaluation provides critical feedback which can be used to improve systems, procedures and content used in the statistical collection. Ideally, a statistical activity should be evaluated at each stage of the statistical cycle followed by a final evaluation. This chapter examines the different assessment methods available for an evaluation strategy.
8.1 EVALUATION STRATEGY
It is good practice to include evaluation procedures as part of continuous improvement of systems and procedures in data collection, processing and dissemination. The evaluation strategy is as much a part of developing any new statistical activity as it is of reviewing a completed activity.
An evaluation strategy should specify a standard set of quality measures and key performance indicators and how these will be measured or assessed. In agencies where a statistical activity is shared across different areas, it is useful to specify reporting responsibilities of each participating area in the evaluation process.
The choice of evaluation methods depends on the stage of the statistical activity being evaluated. Key areas to include in an evaluation strategy might be:
· use of standard data quality measures such as those described in section 2.5 Data Quality framework in the Handbook
· use of process mapping with key performance indicators
· an assessment of outcomes against the initial specifications
· an assessment of clients’ use of data as per the collection objectives
· an assessment of client satisfaction with outputs
· an assessment of the timely completion of activities as per the initial plan
· an assessment of the completion of activities within the budgeted cost
· an evaluation of the field performance of a data collection
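The timeliness and budget assessments in the list above lend themselves to simple key performance indicators comparing planned against actual figures. The sketch below is illustrative only; the activity names, fields and figures are hypothetical, not prescribed by the Handbook.

```python
from dataclasses import dataclass

@dataclass
class ActivityOutcome:
    """Planned versus actual figures for one collection activity."""
    name: str
    planned_days: int
    actual_days: int
    budget: float
    actual_cost: float

def evaluate(activities):
    """Return simple timeliness and cost KPIs for each activity."""
    report = {}
    for a in activities:
        report[a.name] = {
            # ratio > 1.0 means the activity overran its schedule
            "timeliness_ratio": a.actual_days / a.planned_days,
            # ratio > 1.0 means the activity exceeded its budget
            "cost_ratio": a.actual_cost / a.budget,
            "on_time": a.actual_days <= a.planned_days,
            "within_budget": a.actual_cost <= a.budget,
        }
    return report

# hypothetical figures for two collection activities
outcomes = [
    ActivityOutcome("enumeration", planned_days=30, actual_days=36,
                    budget=50_000.0, actual_cost=48_500.0),
    ActivityOutcome("processing", planned_days=20, actual_days=18,
                    budget=20_000.0, actual_cost=22_000.0),
]
kpis = evaluate(outcomes)
```

Indicators of this kind can be tabulated for each participating area, supporting the reporting responsibilities noted in the evaluation strategy.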
The resources used in an evaluation should be in line with the complexity and size of the statistical collection. It is suggested that the data quality framework and criteria outlined in Section 2.5 of the Handbook be used as part of an evaluation of a collection or administrative dataset. Using this framework to answer key questions about the quality of the collection will assist in developing further improvements. This is particularly relevant when the collection is to be repeated. The results of the evaluation should be well documented and stored along with the other documentation pertaining to the collection or dataset.
8.2 EVALUATION OF COLLECTION PROCESSES AND SYSTEMS
Formal evaluation procedures or reviews may be conducted periodically or in response to significant quality concerns. Periodic reviews are useful for maintaining and improving ongoing administrative and statistical systems to ensure that an acceptable level of quality is maintained. Reviews may also be required to assess the impact of significant changes to a statistical process (e.g. modifications to a collection's IT system or processing methods, or changes in user needs).
It is also useful to conduct an independent review of a collection's objectives, systems and processes, either prior to undertaking the collection or as part of the ongoing collection process. This will highlight potential problems which may not be apparent when developing the collection. This review may be conducted by experts from other areas within the agency or by external consultants.
8.2.1 Types of Reviews
Three broad review types are outlined here. These are reviews of effectiveness, efficiency and quality.
Reviews of Effectiveness
These reviews examine how well the outputs satisfied the objectives of the collection activity. Reviews of effectiveness evaluate:
· Adequacy (fitness for purpose) of the statistical products or services – evaluates how well the outputs from the statistical activity meet the preset standards and criteria;
· Appropriateness of a statistics program – evaluates the relative priority and highlights the potential for improving the range of outputs from a statistical program;
· Usefulness of the statistics – evaluates whether the survey outputs and delivery methods meet key user needs; and
· New statistics – evaluates gaps in statistical data required to meet policy initiatives.
Reviews of Efficiency
In general, efficiency reviews evaluate whether the collection is the lowest cost method available to achieve the collection objectives. An efficiency review may uncover inefficiencies in the production of statistics such as duplication of work, incorrect prioritisation or inefficient use of IT resources. Efficiency reviews may also identify alternative sources of data.
Reviews of Quality
These reviews involve application of quality measures to assess the delivery of outputs. Quality reviews include close examination and evaluation of:
· Outputs against key data quality attributes discussed in Section 2.5 Data Quality framework in the Handbook
· Client satisfaction
· Survey methodology
· Standards and classifications
· Codes of good practice
· Systems which support collections
· Service standards
In order to measure the cost/benefit of data systems, quality indicators should be developed for inputs (costs) and outcomes (benefits). When establishing a quality-based assessment framework, it is important to consider not only the quality indicators but also the procedures for monitoring, reporting and providing feedback on quality.
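One way the paired input (cost) and outcome (benefit) indicators described above could be combined is as a benefit-per-unit-cost figure. The sketch below is a minimal illustration under assumed conventions: the cost components, quality attributes and 0–1 scoring scale are hypothetical choices, not part of the Handbook's framework.

```python
def cost_benefit_indicator(input_costs, outcome_scores):
    """Combine input (cost) and outcome (benefit) indicators into a
    single benefit-per-unit-cost figure for a data system.

    input_costs: mapping of cost component -> spend
    outcome_scores: mapping of quality attribute -> score on a 0-1
    scale (attribute names and the scale are illustrative only)
    """
    total_cost = sum(input_costs.values())
    # average the outcome scores as a crude overall benefit measure
    benefit = sum(outcome_scores.values()) / len(outcome_scores)
    return benefit / total_cost

# hypothetical inputs and outcome scores for one data system
indicator = cost_benefit_indicator(
    {"field_work": 40_000.0, "processing": 10_000.0},
    {"accuracy": 0.9, "timeliness": 0.7, "relevance": 0.8},
)
```

Tracking such an indicator over successive cycles of a collection supports the monitoring, reporting and feedback procedures mentioned above.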