This article is part of our guide to computer-assisted telephone interviewing (CATI). For context, start with part 1 of this guide. Download and make copies of the resources for this guide stored here.
Currently, most countries around the world are taking measures to ensure health and safety for all, which poses new challenges to data collection. If you've chosen CATI as an alternative to face-to-face interviews, you'll also need to think about how to train your team remotely. You won't be able to conduct in-person training or easily distribute equipment, so you'll need new materials to build and assess your team members' knowledge at a distance. In this article, we provide some resources for your remote training, including tutorial slides, quizzes, and mock interviews. Feel free to explore and tailor them as you see fit!
Setting up SurveyCTO Collect
One of the first preparation steps for a remote team member is to install SurveyCTO Collect on their device. While installing SurveyCTO Collect isn't hard, there are a few settings to configure, and specific instructions can help ensure all team members have the correct setup.
To help your team get started, we have prepared instructions in presentation files, optimized for small screens. Customize these templates to fit your needs, and then send them to your enumerators so they can be guided through the first steps:
Note: In both slide shows, slide 8 involves settings that are already selected by default. We've included it simply to cover cases where these settings were changed previously. Feel free to remove slide 8! (You'll also want to remove "Again" from slide 9.)
Quizzes
It is generally a good idea to assess an enumerator's knowledge of the data collection process, no matter the data collection method. Self-grading quizzes can be a good time investment, especially if it is a test you will run more than once, since you won't have to grade each new round of enumerators by hand. You can later export the data from your self-grading assessment form to see who passed and who didn't. Here is how you can set up a quiz:
- Create a group of closed-ended questions. They can be select_one, select_multiple, integer, or text fields, as long as there is only one correct answer.
- After each question, create one calculate field. This field evaluates to 1 (TRUE) if the answer was correct, and 0 (FALSE) otherwise. The calculation expression should be something like
if(${fieldname}='ans', 1, 0)
where 'ans' is the correct answer.
- At the end, create a calculate field that sums all of the individual calculate fields. This will give you the total number of correct answers.
You can see how this works using this sample form.
If you're worried about case sensitivity in step 2, check out our new upper() and lower() functions.
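Once your team has submitted their quizzes, you can export the form data and review the results in whatever tool you prefer. As a minimal sketch, assuming you export to CSV and work in Stata, and assuming hypothetical names (an enum_name field, a summed total_score field) and a pass mark of 8 out of 10 questions:
*Import the exported quiz data (hypothetical file name)
import delimited using "quiz_form_WIDE.csv", clear
*Flag everyone who answered at least 8 questions correctly
gen passed = (total_score >= 8) if !missing(total_score)
*Review each enumerator's result
list enum_name total_score passed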
Mock Interviews
Finally, we would like to suggest an approach to conducting mock interviews. This is a great way to assess how ready your enumerators are to conduct a real phone interview. At this stage, they will need to be comfortable using SurveyCTO Collect and have a solid knowledge of the questionnaire and the survey workflow.
In our example, we are using the following instruments and features:
- The advanced CATI starter kit sample form.
- Publishing the above form data to Google Sheets.
- A Google Document that includes scripts for the mock interviews.
- A Google Sheet that includes:
- A dataset with the "correct" data to be collected when each mock interview script is followed (sheet “correct_data”),
- Published form data from step 2, which is data collected by the trainees during their mock interviews (sheet “data”),
- A sheet that calculates the differences between the above datasets, displaying how many fields were answered correctly by the enumerators (sheet “assessment”).
A mock interview example
To help you understand the above, let's walk through an example: Fieldworker A fills out your CATI form while speaking to a trainer, and the trainer responds based on script 1. When the form is submitted to the server, the data is published into the "data" sheet of the Google Sheet. The new row that appears on the "data" sheet then gets assessed and scored on the "assessment" sheet, based on the "correct" answers in the "correct_data" sheet.
Customizing the mock interview sample
There are a few considerations to keep in mind when adjusting this example. Most importantly, make sure that the "data", "correct_data", and "assessment" sheets only contain relevant columns, so that mock interviews remain easy to read, match, and evaluate. Which columns are relevant will depend on how and what you would like to evaluate using this tool.
In this example, we are not evaluating the datetime fields related to rescheduling interviews, but these might be relevant to you, particularly if your project has set a specific rule about rescheduling. We have also not included text fields (e.g., names), because the formulas used are case sensitive and could easily flag a correct answer as incorrect. Depending on your requirements, these types of fields might also be important to evaluate.
Alternative approaches
The above is just one example of how you can automatically score mock interviews, but there are other approaches you can use. Below you can find two additional alternatives, so you can select the one that suits you best; of course, you may also come up with a completely different way to approach this. The first is similar to the above method, but uses data pre-loaded from a "correct data" dataset. The second is for those using Stata.
Pre-loading data
Instead of comparing two datasets after the mock interviews are submitted and sent to the server, you can calculate the enumerator's score directly in your form design. This way, all scores are stored on the server, and they can be exported or published from there. Here are the steps:
- Save your "correct data" as a CSV file and attach it to the form definition. You can do this in one of two ways: a) directly attach it to your form, or b) upload it to a server dataset and attach the dataset to your form.
- Update your form design:
- Create an ID that will uniquely identify each mock interview: a calculate field that concatenates the caseid with the attempt number ("call_num"):
concat(${caseid}, ${call_num})
- After each field you want to evaluate, create a calculate field that assesses whether the answer was correct or not, based on that ID. This is the same method used in the quiz, except that you pre-load the correct value for the field from the "correct data":
if(${fieldname}=pulldata('correct_data', 'fieldname', 'id', ${id}), 1, 0)
This will return 1 if the expression is true (the answer was correct), or 0 if it is false.
- At the end of the form, create a calculate field that sums all of the above individual calculate fields. To calculate a score as a percentage, you can use the following formula: (sum of all calculate fields div total number of fields evaluated)*100.
While the above changes to the design won’t be helpful for your data collection, they will not be harmful either, so it’s okay if you use this same form for both the training and the actual interviews. You have two alternatives:
- Instead of updating the original form definition in your server with the changes described, copy and deploy a new one with a new form title and ID, which will only be used for training purposes.
- Alternatively, at the end of the training, simply ignore the additional variables that assess responses; users of your form won't notice extra calculate fields anyway. If you happen to use Stata or a similar application, you can script the dropping of those training-only variables as part of data cleaning (see the sketch after the note below). Lastly, if you ever need to train late additions to the team, you can use the same training approach as before.
Optional: If you've chosen the first alternative, you can also add a note field with the "thankyou" appearance that displays the score to the enumerator after they have submitted the form. That way, they know right away how well they did.
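If you go with the second alternative and clean your data in Stata, the training-only fields can be dropped in a couple of lines. Here is a minimal sketch, assuming hypothetical variable names (per-question check_* fields and a training_score total) and the exported file name used elsewhere in this article:
*Load the exported survey data
use "C:\Users\username\Documents\Form Title_WIDE.dta", clear
*Drop the calculate fields that were only used during training
drop check_* training_score
*Save a clean copy for analysis
save "C:\Users\username\Documents\Form Title_WIDE_clean.dta", replace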
Using Stata
For those who use Stata, there is an alternative method to evaluate mock interviews without Google Sheets. Innovations for Poverty Action (IPA) developed a Stata command, cfout, that compares two datasets. You can use it to compare the "correct data" with the data collected. You will need:
- Two datasets in .dta (Stata) format. Just like before, you will need the "correct data" and the exported data, i.e., the data entered by the trainees.
- A unique identifier that the two datasets have in common. Although the caseid is a unique identifier in the cases dataset, it is not in the final dataset, because you can have up to 6 contact attempts (submissions) from the same case. Our suggestion is to create an ID that concatenates the caseid with the contact attempt (call_num); see the short sketch after this list for one way to add it to both files. In Stata, assuming caseid is stored as a string and call_num as a number, the command would look like:
gen id = caseid + string(call_num)
- You should also specify the relevant columns/variables you want to compare, as mentioned in the section above.
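For example, here is a minimal sketch of adding that shared ID to both files before running cfout, assuming the file names used in the script below, that caseid is stored as a string, and that call_num is numeric:
*Add the shared id to the exported (trainee) data
use "C:\Users\username\Documents\Form Title_WIDE.dta", clear
gen id = caseid + string(call_num)
save "C:\Users\username\Documents\Form Title_WIDE.dta", replace
*Add the same id to the "correct data"
use "C:\Users\username\Documents\Correct data.dta", clear
gen id = caseid + string(call_num)
save "C:\Users\username\Documents\Correct data.dta", replace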
Below, you can find a script example to use the command and automatically score your mock interviews:
*Install the cfout command
ssc install cfout
*Import your "correct data"
use "C:\Users\username\Documents\Correct data.dta", clear
*Use the command option that saves a dataset with all comparisons (saving()). Replace varlist with the variables you want to compare; the id() variable should be the one we created in step 2 above
cfout varlist using "C:\Users\username\Documents\Form Title_WIDE.dta", id(id) saving(differences, all(diff))
*Open the new dataset "differences", created by the previous command
use "C:\Users\username\Documents\differences.dta", clear
*Create a new column that displays what the correct answer should be and what was actually entered, whenever the two are different
gen diff_ = "Correct: " + Master + "; Entered: " + Using if diff > 0
*Calculate the total number of values compared per submission
bysort id: gen total = _N
*Calculate the total number of wrong answers per submission
bysort id: egen wrong = total(diff)
*Calculate the score, as a percentage
gen score = ((total - wrong) / total) * 100
*Drop irrelevant columns
drop diff Master Using total
*Reshape the dataset so that each row corresponds to one submission
reshape wide diff_, i(id) j(Question) string
If you run this script, you will end up with a dataset that shows the score of each submission, including one column for each variable you used for comparison. These columns will show the differences between the “correct data” and the data collected, when they exist.
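If you would like to share the results with your trainers, one option is to sort the submissions by score and export them; for example (the file path is just illustrative):
*Sort submissions from highest to lowest score
gsort -score
*Export the scores and the difference columns to a CSV file
export delimited using "C:\Users\username\Documents\mock_interview_scores.csv", replace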
We hope this will help you engage with your enumerators at a distance!
Do you have thoughts on this guide? We'd love to hear them! Feel free to fill out this feedback form.