A quick-start guide to sensor-powered quality control

This article relates to a sample form that you can view here. Save a copy to your Google Drive or download as an Excel workbook. We also held a webinar on this topic.

Last year, we released support for recording device sensor meta-data in your forms, either as a continuous stream of values reported by the device or as convenient sensor statistics automatically saved inside the form data. While the full sensor stream data takes a bit more effort to work with (made a little easier for Stata users by this command), sensor statistics are readily available, can be collected at no added cost as part of your mobile data collection, and can easily augment your data quality processes.
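
To picture the difference between the two formats: a sensor stream is the raw series of readings taken while a form is open, while sensor statistics are summaries saved directly into the form data. Here is a minimal sketch in Python, with hypothetical readings and statistic names:

```python
from statistics import mean

# A sensor stream is the raw series of readings recorded during a submission;
# sensor statistics are the summaries saved directly into the form data.
# Both the readings and the statistic names below are hypothetical.
sound_stream = [41.2, 39.8, 44.5, 40.1, 43.3]  # e.g., sound levels over time

sound_stats = {
    "sound_mean": round(mean(sound_stream), 2),
    "sound_min": min(sound_stream),
    "sound_max": max(sound_stream),
}
print(sound_stats)  # {'sound_mean': 41.78, 'sound_min': 39.8, 'sound_max': 44.5}
```

Because sensor statistics arrive as ordinary columns in your exported data, you can analyze them with the same tools you already use for the rest of your dataset.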

We suggest that you include a range of sensor statistic fields in any form that will be used for data collection with SurveyCTO Collect for Android (collecting sensor meta-data is not currently supported by our iOS app or by web forms). You can read the descriptions of those sensor statistics in the documentation so that you understand what each one does in general. However, you do not need a detailed understanding of their distributions to run automated quality checks that look for outliers. Outlying values are seldom insignificant, so it is worth calling them to your attention as part of your human-driven review process. You can make this process even easier for the humans involved by flagging submissions for assessment in the review workflow, triggered by automated quality checks.
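
The outlier logic behind such checks is simple to reason about: a common rule flags values that fall more than 1.5 times the interquartile range outside the middle quartiles. The sketch below illustrates that rule in Python; it is an approximation for intuition, not SurveyCTO's exact algorithm or thresholds.

```python
# Sketch of an IQR-based outlier rule, similar in spirit to a "Value is an
# outlier" automated check. Illustrative only: not SurveyCTO's exact
# algorithm or configured thresholds.
from statistics import quantiles

def iqr_outliers(values, k=1.5):
    """Return values falling outside [Q1 - k*IQR, Q3 + k*IQR]."""
    q1, _, q3 = quantiles(values, n=4)  # sample quartiles
    iqr = q3 - q1
    lo, hi = q1 - k * iqr, q3 + k * iqr
    return [v for v in values if v < lo or v > hi]

# Example: hypothetical mean light-level readings, one per submission.
light_means = [12.0, 14.5, 13.2, 12.8, 15.1, 13.9, 14.2, 13.0, 95.0, 13.5]
print(iqr_outliers(light_means))  # [95.0]
```

The flagged reading is not automatically wrong; as discussed above, its job is simply to direct a human reviewer's attention.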

Note also that sensor meta-data is just the newest addition to an already mature set of quality-control field types that can be included in a SurveyCTO form design. Integrating automated flagging of submissions based on sensor meta-data, as suggested in this guide, can nudge you to pay closer attention to:

  • Text audits: a field-by-field record of the amount of time spent on each field, and the first point in time that each field appeared on screen; for detailed auditing.
  • Audio audits: an invisible audio recording that can be triggered at random, for listening in on survey interviews when you have questions about how questions were asked.
  • Speed violations: a series of timing-related triggers, to record events where fields are completed too quickly, and even trigger audio audit recordings in response.

All of these quality control fields are useful by themselves, but can generate a lot of data, so additional flags like questionable positioning in the distribution of sensor statistics can help focus your attention on the right individual submissions. (See this page to learn more about quality control features in SurveyCTO.)

1. Including sensor statistics in your forms

The first step is including sensor statistic fields in your form design. If any field-by-field or survey-module-based analysis is foreseeable, consider including sensor stream fields too. You can use the accompanying sample form as a template by simply deleting all fields on the survey sheet that follow the sensor statistic fields, adding your own fields, and changing the form title and ID on the settings sheet. 

If you prefer to work in the online form designer, you can add sensor statistic fields from the +Add hidden field tab of the field creation interface. Alternatively, change the ID of your own copy of the sample form on the settings sheet, deploy it using the Upload form definition option on the server console's Design tab, and then click on the Edit button for the newly-deployed sample to work on it in the online designer.

Also note that the sample includes additional quality control fields, enabling greater insight into flagged cases, and the ability to drill a little deeper into what happened while filling a form. In contrast to the broad reach of automated statistical checks based on the distribution of sensor meta-data, audio audits will help give you a qualitative sense of how well an interview was conducted, and text audits will help you to analyze irregular form submissions more deeply.
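
For example, once you download a flagged submission's text audit file, a short script can surface the fields that were completed suspiciously fast. The CSV excerpt and the two-second threshold below are illustrative assumptions; match the column names to your actual export.

```python
import csv
from io import StringIO

# Hypothetical excerpt of a text audit CSV for one flagged submission.
# Column names are illustrative; adjust them to match your actual export.
audit_csv = """Field name,Total duration (seconds),First appeared (seconds into survey)
consent,12.4,3.1
hh_size,1.0,15.8
income,2.2,17.0
"""

rows = list(csv.DictReader(StringIO(audit_csv)))
# Fields completed in under 2 seconds deserve a closer look.
fast = [r["Field name"] for r in rows
        if float(r["Total duration (seconds)"]) < 2.0]
print(fast)  # ['hh_size']
```

A field answered in one second is not necessarily a problem, but it is exactly the kind of clue worth pairing with an audio audit recording when assessing a flagged submission.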

The audio audit field in the sample form will record the first 60 seconds of sound in the background of each form submission. This is just for illustrative purposes though. For advice on using audio audits strategically in practice, consult this blog post.

Deploy your form design once you're happy with it, either by uploading a spreadsheet as above, or by clicking on Save and deploy under the Save menu in the online designer.

2. Set up an automated quality check

The next step is to set up an automated quality check that flags outliers in your sensor statistic fields. You only need to set up one check, since a single check of the same type can be applied to multiple fields. Follow these steps, which you can also see in the screen recording that follows:

  1. Navigate to the Monitor tab of the SurveyCTO server console.
  2. Scroll down to the Automated quality checks section (after the Form submissions and dataset data section). Turn it ON if necessary.
  3. Under the title of your form, select Options, and enable the option to Run all checks nightly. Optionally activate email reporting as well. Save the change.
  4. Click on the Checks option, and click on the button to +Create quality check.
  5. In the first step of the quality check creation wizard, under the Create a new quality check heading, pick Value is an outlier (learn about each check type here).
  6. Click on Next in the bottom right.
  7. In the field selection window that opens, check the box for each sensor statistic field. Disable automatic filtering in the top right if you need to, and use the search field to narrow results.
  8. Click on Next in the bottom right.
  9. Click on Save in the bottom right.

Watch the video above to see these steps performed in sequence.

With the above configured, automated quality checks will be triggered nightly, generating reports, which you can read about here (see the "Quality check reports" heading).

3. Set up the review workflow

With the above automated quality check configured, you can now use it as a criterion for flagging submissions for human review in the review workflow. In addition to whatever other random and purposeful criteria you set up for flagging submissions for individual scrutiny, you can make sure that submissions are flagged whenever any sensor statistic falls outside the configured interquartile range threshold. Follow these steps, which you can also see in the screen recording that follows:

  1. Navigate to the Monitor tab of the SurveyCTO server console.
  2. Scroll down to the Form submissions and dataset data section (above the Automated quality checks section).
  3. Under the title of your form, click on Review workflow.
  4. If it is not yet enabled, click the toggle button to Enable review and correction workflow for this form.
  5. In the first option, choose to flag Some submissions for human review.
  6. The first option is to Flag incoming submissions based on results of quality checks - enable this option.
  7. Check the sub-option for the category of quality checks that contains your outlier check.
  8. If you have other categories of quality check you want to trigger reviews, check those categories as well. 
  9. We strongly recommend using the Flag a random percentage of submissions option to complement whatever purposeful statistics-driven manual review process you will be using. Set this percentage to at least 20%.
  10. Once you're done considering other review workflow options (read more here), click on the Save button in the bottom right.

Watch the video above to see these steps performed in sequence.
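
When deciding on the random percentage, it helps to estimate the combined review load. Treating the random flag and the quality-check flags as roughly independent, the expected flagged share is 1 minus the product of the unflagged shares. The outlier rate and submission volume below are illustrative assumptions:

```python
# Rough estimate of review load when a random-percentage flag is combined
# with quality-check flags. The rates below are illustrative assumptions.
random_share = 0.20   # the recommended minimum random-review percentage
outlier_rate = 0.05   # assumed share of submissions flagged by outlier checks

# Expected combined share, treating the two flags as independent.
combined = 1 - (1 - random_share) * (1 - outlier_rate)
print(round(combined, 3))     # 0.24

# At, say, 250 submissions per week, that is the weekly review workload.
print(round(combined * 250))  # 60 submissions to review
```

If that workload looks too heavy for your team, you now know which levers to adjust before data collection starts rather than after.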

4. Putting it all together

Once you have completed the steps in this guide, you're all set! As data starts to arrive on your server console, submissions will be flagged based on where they fall in the distribution of sensor statistic data, according to your settings. (Automated outlier checks only kick in once the sample size passes 10, so you'll need to collect more than 10 submissions before any get flagged.)
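
That minimum-sample behavior is worth accounting for in any monitoring scripts of your own. A tiny sketch, where the threshold simply mirrors the "passes 10" rule described above:

```python
MIN_SAMPLE = 11  # outlier checks kick in once the sample size passes 10

def outlier_check_active(submission_count):
    """True once enough submissions have arrived for outlier checks to run."""
    return submission_count >= MIN_SAMPLE

print(outlier_check_active(9), outlier_check_active(11))  # False True
```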

Watch the server console's notifications in the top right for alerts (⚠️) about submissions to review. Click these links to go straight to the form, or visit the Monitor tab and click on Review now to start reviewing both randomly flagged submissions and those flagged by quality checks.

Scrutinize your form submissions carefully and generously. If you are recording GPS locations, see where the submission was situated. If flagged submissions include an audio audit recording, listen to that for clues while you assess the trustworthiness of the submissions flagged for review. Also, remember that there could always be a perfectly good explanation for outlying average sensor readings; it is just a good idea to find out why, and to record the reason in your review system.

For more ideas and considerations on advanced use of sensor meta-data in combination with SurveyCTO's other data quality features, please see this more detailed article.

5. See our sensor-powered quality control webinar

Since writing this article, we held a webinar on this topic that explained what sensor meta-data is, what it looks like, and how it can be used to collect higher-quality data. You can find the webinar recording and other useful videos in our video library, or watch it right here: 

Do you have thoughts on this support article? We'd love to hear them! Feel free to fill out this feedback form.

