The Statistical Clearing House (SCH) was established by the Prime Minister in 1997 within the Australian Bureau of Statistics (ABS) to be the central clearance point for business surveys that are run by, funded by, or conducted on behalf of the Commonwealth Government. Our key objectives are to minimise the load placed on businesses by Commonwealth Government surveys, reduce unnecessary survey duplication, and ensure surveys are fit-for-purpose given the survey objectives.
Watch this seminar to learn more about which surveys require SCH review, what is involved in a review, how the SCH adds value to the survey development process, and useful resources that may help you develop your survey.
For the latest SCH news read our Newsletters and Annual Reports.
Survey Manager's responsibility
It is the Survey Manager's responsibility to initiate the SCH review process and seek SCH approval. More information on how we review your survey can be found below, including which surveys require SCH approval and how the review process works. You can also submit your survey for review online.
Benefits of an SCH review
We provide advice on, and help make improvements to, the following areas:
- survey methodology, such as the sample design
- layout and design of survey instruments and questionnaires
- introductory and follow-up procedures to improve response rates
- quality of survey outputs
- interpretation of outputs from surveys with low response rates.
Our network of Statistical Liaison Officers
We maintain a network of Statistical Liaison Officers within each Commonwealth Government agency. Their role is to be our communication point with their agency, and includes:
- creating links between Survey Managers and the SCH
- advising Survey Managers of their SCH responsibilities
- promoting the SCH within their agency
Contact us if you are interested in the Statistical Liaison Officer role for your Commonwealth Government agency.
The purpose of the SCH review is to ensure surveys minimise provider load and are fit-for-purpose.
What is 'provider load'?
'Provider load' is "the effort, in terms of time and cost, required for respondents to provide satisfactory answers to a survey". It has two components:
- Actual load, which is quantifiable in terms of the frequency of contact and the length of the questionnaire.
- Perceived load, which is not quantifiable: it is the level of cognitive effort required to participate in the survey, shaped by how the respondent perceives the survey's requirements (such as the presentation of the survey instruments and the information provided in the approach letter and other provider information).
How is provider load assessed?
We assess actual provider load by the frequency (e.g. monthly, quarterly, annually) and the length of time that the respondent is engaged with the survey. This can be further broken into:
- contact time, which is the number of businesses that the survey makes contact with, multiplied by the time taken to make the initial contact (usually to gauge whether the respondent is in-scope and willing to participate in the survey), and
- responding time, which is the number of businesses from which a response is actually received (including partial responses), multiplied by the time taken to respond to the survey. Responding time should include activities such as compiling and collecting records in order to be able to complete the survey.
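The arithmetic behind actual provider load can be sketched as follows. This is an illustrative example only: the function name, figures, and the choice to express the result in hours are assumptions for demonstration, not SCH-published parameters or methods.

```python
# Illustrative sketch of the actual-provider-load calculation described
# above: contact time plus responding time. All figures are hypothetical.

def actual_provider_load_hours(contacted, contact_minutes,
                               responded, respond_minutes):
    """Actual load = contact time + responding time, converted to hours."""
    contact_time = contacted * contact_minutes      # total minutes spent on initial contact
    responding_time = responded * respond_minutes   # total minutes spent responding
    return (contact_time + responding_time) / 60

# e.g. 1,000 businesses contacted for 5 minutes each, with 700 responses
# taking 30 minutes each (including time to compile records):
load = actual_provider_load_hours(1000, 5, 700, 30)
print(f"{load:.0f} hours")  # prints "433 hours"
```

For a survey run quarterly rather than once, the figure would then be multiplied by the number of collection cycles per year.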
We assess perceived provider load by looking at whether:
- an appropriate data collection mode is chosen
- the questionnaire is structured in a logical fashion that is easily followed by respondents
- the questionnaire contains all of the necessary instructions and clear definitions to reduce confusion
- the surrounding infrastructure of the collection instrument enables ease of use. Example: for a web form, it is crucial that respondents can easily access the form, save and exit the form, and then re-enter the form
- the collection instrument has been appropriately tested with respondents to ensure it will operate as expected in the field.
We assess whether appropriate measures are taken to reduce provider load where possible, including:
- sufficient investigation into alternative sources of information, with justification that no reasonable alternative means exists of obtaining the required information with lower provider load
- appropriate selection of survey methodology (e.g. sample design and estimation methods) given the survey's goals, constraints, and required quality. In particular, that the sample size does not exceed what is required.
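One common way to check that a sample size does not exceed what is required is a standard sample-size calculation for estimating a proportion, with a finite population correction. The sketch below is illustrative only: the function, the 95% confidence level, and the ±5 percentage point margin are assumptions for demonstration, not an SCH-prescribed method.

```python
import math

def required_sample_size(N, margin=0.05, p=0.5, z=1.96):
    """Approximate number of completed responses needed to estimate a
    proportion to within `margin` at ~95% confidence (z = 1.96), with a
    finite population correction for a population of N businesses.
    Illustrative sketch only; p = 0.5 is the most conservative choice."""
    n0 = z**2 * p * (1 - p) / margin**2         # infinite-population size
    return math.ceil(n0 / (1 + (n0 - 1) / N))   # finite population correction

# For a population of 10,000 businesses, roughly 370 completed responses
# suffice to estimate a proportion to within +/-5 percentage points,
# which is far smaller than a census of all 10,000.
print(required_sample_size(10_000))  # prints 370
```

This kind of calculation is also why a census is often hard to justify: beyond a certain point, extra responses add provider load without a commensurate gain in precision.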
What is 'fit-for-purpose'?
'Fit-for-purpose' means that the survey's design, conduct, and analysis produce results that meet the survey purpose. The survey purpose, in turn, determines the level of data quality required.
How is fitness-for-purpose assessed?
Overall, we look at whether the quality of the outputs is appropriate given the objectives of the survey. In particular, we assess whether the quality of the data collected will be:
- lower than is required for its purpose. It is difficult to justify the load to businesses when the resulting quality of data is insufficient to meet the survey objectives.
- much higher than is necessary for its purpose, resulting in a higher level of provider load than is required. Example: running a census may not be necessary when a sample of the businesses would attain the required level of data quality.
Some key questions we ask when assessing fitness-for-purpose are:
- Do the outputs described meet the purpose of the survey?
- Have standards and classifications been used where appropriate to ensure the utility and comparability of the data with similar datasets?
- Has the survey instrument been tested with respondents to ensure that the survey questions are interpreted as expected?
- Does the frame (list of businesses) provide adequate coverage of the intended population? Are there any issues, such as over- or under-coverage, which may affect how representative the data is of the intended population?
- Will the sample design and expected response rate be sufficient to produce the required level of accuracy?
- Are there methods in place for dealing with issues, such as outliers and non-response, to ensure that the bias is minimised and that the data quality will not be poorer than is required?
- Are there appropriate data collection methods in place to ensure that the response rate, and consequently the accuracy, is as high as possible? Examples: are pre-approach letters used? Is there an adequate method for following up businesses that have not responded?