About the Research Methods Cluster

Empirical research requires good data to complement statistical rigor in providing answers to global poverty questions. Poor data quality can lead to biases in causal inference, lower the probability of detecting the true effect of a program, and limit the generalizability of findings to other contexts. Without good quality data, policy decisions based on such evidence may be misguided, yet economists have little methodological research to inform the data generating process.

To conceptualize how research design choices affect data quality, Dillon et al. (2020) introduce the idea of the data quality production function. To maximize knowledge produced from a given study, subject to a budget constraint, a researcher must make choices regarding research design, questionnaire design, and field protocols that affect data quality. These choices form the themes of the Research Methods Initiative: (1) Research Design, (2) Questionnaire Design and Measurement, and (3) Fieldwork Implementation and Data Quality.

The Global Poverty Research Lab promotes methodological studies that can help researchers make more informed trade-offs and suggest best practices for data collection teams to improve survey data quality. An important part of the Research Methods Initiative is our collaboration with Innovations for Poverty Action (IPA), which leverages methodological learning across 22 country offices and 300 active studies around the world.

  • Research Design. At the heart of empirical analysis is the problem of establishing causal relationships in the data we collect, from which we can derive effective policy recommendations. Randomized controlled trials have provided an important new tool for addressing identification in research design, but we can learn more. Examples of work within this theme include innovations in RCT designs, the implications of different sampling strategies and statistical power for research designs, and how to design for replication and scale.

  • Questionnaire Design and Measurement. This theme focuses on how survey instruments are designed and on the consequences of the alternative choices a researcher may make in how data are collected. Measurement error can bias key variables, with important consequences not only for the representation of population-level characteristics but also for the empirical relationships estimated. Examples of work within this theme include estimating the relative biases of alternative questionnaire designs arising from recall periods, question framing, proxy rules, and alternative units of analysis. We will also explore alternative strategies and technologies for capturing hard-to-measure concepts.

    Household Definitions

    How a "household" is defined in multi-topic household surveys varies between studies, with potentially significant implications for household composition, production, and poverty statistics. We have compiled the data and survey questionnaires from 500+ studies and intend to compare their stated definitions of the household against the consumption, income, and labor information in the data sets to analyze how the definition affects these outcomes.

    Attention Checks in Phone and Field Surveys

    With surveys that are long or cognitively taxing, a respondent’s attention and motivation can change through the course of the survey. This can affect data quality as respondents may provide incorrect information, skip specific questions, or discontinue the survey. This study aims to measure changes in attentiveness and respondent fatigue through digit span tests and examines implications for questionnaire design.

  • Fieldwork Implementation and Data Quality. This theme focuses on implementation decisions that limit non-random measurement error in data collected from phone and field surveys. Studies under this theme have the potential to yield insights on enumerator labor markets, interdisciplinary insights into personal interviewing, and practical tools to be integrated as best practices within IPA and other data collection organizations. Examples of work under this theme include exploring interviewer effects, including how recruitment, training, and motivation of enumerators improve data quality, and using old and new data quality tools such as observational visits, audio audits, backchecks, high-frequency checks, nightly monitoring reports, and machine learning to recognize data falsification.
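    One of the tools above, the backcheck, re-interviews a subsample of respondents and compares key answers to the original survey. A minimal sketch of how such a check might be computed is below; the record layout, field names, and the 10% flagging threshold are illustrative assumptions, not GPRL or IPA conventions.

    ```python
    from collections import defaultdict

    def backcheck_discrepancy_rates(survey, backcheck, check_keys, threshold=0.10):
        """Compare original survey records to backcheck (re-interview) records
        and return, per interviewer, the share of checked answers that disagree.

        survey/backcheck: lists of dicts keyed by respondent "id"; survey
        records also carry "interviewer". All names are illustrative.
        """
        backcheck_by_id = {rec["id"]: rec for rec in backcheck}
        # interviewer -> [mismatches, comparisons]
        tallies = defaultdict(lambda: [0, 0])
        for rec in survey:
            bc = backcheck_by_id.get(rec["id"])
            if bc is None:
                continue  # respondent was not re-interviewed
            tally = tallies[rec["interviewer"]]
            for key in check_keys:
                tally[1] += 1
                if rec[key] != bc[key]:
                    tally[0] += 1
        return {
            interviewer: {"rate": m / n, "flagged": m / n > threshold}
            for interviewer, (m, n) in tallies.items()
        }
    ```

    An interviewer whose discrepancy rate exceeds the threshold would then be reviewed, retrained, or have more of their interviews audited.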

    Togo Novissi Interviewer Survey

    As COVID-related restrictions disrupted the livelihoods of informal workers, the Togolese government instituted Novissi, a cash transfer aid program. This study uses the opportunity to measure interviewer effects, linking three available data sources: 1) phone survey data collected by the interviewers on the reach of the Novissi program; 2) responses to an interviewer survey, with information on the interviewers' demographics, work experience, soft skills, cognitive abilities, and technological skills; and 3) administrative data from mobile money records that provide a list of Novissi beneficiaries. The study evaluates the effect of interviewer characteristics, respondent characteristics, and respondent-interviewer interactions on data quality, measured by comparing interviewer-collected survey data to mobile money records.

    Multi-country Interviewer Survey

    This study interviewed enumerators collecting data for 8 IPA phone surveys (11 survey waves). The interviewer survey collects data on the interviewers' demographics, work experience, soft skills, cognitive abilities, and technological skills. These are linked to the data they collected and to data quality measures, to evaluate the effect of interviewer characteristics, respondent characteristics, and respondent-interviewer interactions on data quality.

    Interviewer Effects Using Existing Data

    Comparing survey and backcheck data from multiple completed GPRL and IPA studies, this study examines variation in data that may arise from differences in data quality based on interviewer characteristics, respondent characteristics, and respondent-interviewer interactions.

    Behavioral Messaging to Improve Phone Survey Response Rates and Panel Retention

    There has recently been growing interest in mobile phone surveys in low- and middle-income countries, but the efficacy of methods for improving response rates is less well established there than in the US and Europe. This study randomizes the use of pre-survey text messages, and the type of appeal made in the content of the text, to improve phone survey response rates and panel retention. The variation in message content includes appeals to a respondent's civic motivation, appeals to self-interest with reminders about monetary compensation, and, for panel surveys, sharing important findings from previous rounds. The experiment is conducted in 11 random-digit dial (RDD) surveys in 10 countries, of which 5 are panel surveys.
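    Random assignment to message arms in an experiment like this might be sketched as follows; the arm labels, the seeded shuffle, and the round-robin blocking are illustrative assumptions, not the study's actual protocol.

    ```python
    import random

    # Illustrative arm labels: three appeal types plus a no-message control.
    ARMS = ["civic", "monetary", "findings", "control"]

    def assign_arms(phone_numbers, seed=0):
        """Assign each phone number to one treatment arm.

        A seeded shuffle makes the assignment random but reproducible, and
        cycling through ARMS keeps arm sizes balanced (within one unit).
        """
        rng = random.Random(seed)
        numbers = list(phone_numbers)
        rng.shuffle(numbers)
        return {num: ARMS[i % len(ARMS)] for i, num in enumerate(numbers)}
    ```

    Recording the seed alongside the assignment lets the research team reproduce and audit the randomization later.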

    COVID-19 Response – Phone Survey Methods in LMICs

    As COVID-19 pushed research organizations like IPA to halt field data collection, our teams shifted to remote data collection, mostly through Computer-Assisted Telephone Interviews (CATI). This offered an opportunity to continue ongoing studies and undertake new data collection efforts that shed light on the repercussions of COVID in the developing world. As teams changed survey mode, many questions arose about data collection protocols, response rates, and data quality. To address these, IPA, with generous support from Northwestern University, has produced a wealth of open-access resources on phone survey methods that can help international research organizations adapt to a changing world.

    Research Methods Notes

    Cutting across all three themes, the Research Methods Notes are short articles that disseminate early findings and lessons from ongoing projects to a wide audience of practitioners. Read our Research Methods Notes here:

    Dillon, Andrew and Mensah, Edouard Romeo, Respondent Biases in Household Surveys (March 2021). Global Poverty Research Lab Working Paper No. 21-103

    Dillon, Andrew and Rao, Lakshman Nagraj, Land Measurement Bias: Comparisons from Global Positioning System, Self-Reports, and Remote Sensing Data (February 2021). Global Poverty Research Lab Working Paper No. 21-102

    Jayachandran, Seema, and Biradavolu, Monica and Cooper, Jan, Using Machine Learning and Qualitative Interviews to Design a Five-Question Women’s Agency Index (March 2021).

    Asiedu, Edward and Karlan, Dean and Lambon-Quayefio, Monica and Udry, Christopher, A Call for Structured Ethics Appendices in Social Science Papers (January 2021). Global Poverty Research Lab Working Paper No. 21-101

    Akogun, Oladele and Dillon, Andrew and Friedman, Jed Arnold and Prasann, Ashesh and Serneels, Pieter M., Productivity and Health: Physical Activity as a Measure of Effort (January 2020). Global Poverty Research Lab Working Paper No. 20-101

    Handbook of Agricultural Economics chapter: Agricultural Data Collection to Minimize Measurement Error and Maximize Coverage (forthcoming)

    Go to the GPRL Structured Ethics Appendix

  • Would you like to integrate a methods experiment into your upcoming study? Write to let us know which existing methods study you are interested in, or suggest a new one.

Contact us about the Global Poverty Research Lab

Global Poverty Research Lab
601 University Place | Scott Hall | Evanston, IL 60208