Inviting a Conversation About Rigor in Qualitative Research

By Branda Nowell, Kate Albrecht

FROM THE JOURNALS: This occasional feature highlights a publication that’s likely to be of interest to Incubator readers. This post is drawn from “A Reviewer’s Guide to Qualitative Rigor,” from the Journal of Public Administration Research and Theory. We’re grateful to Oxford University Press and the Public Management Research Association for providing free access to the full article through June 30, 2019.

Branda Nowell is a professor and Kate Albrecht is a doctoral candidate, both in the School of Public and International Affairs at North Carolina State University.

*             *             *

What constitutes rigor in qualitative research is a conversation we’d like to advance. In our recent JPART article, we explore this notion in the context of public management. The conversation is also of central concern to the field of program evaluation, which has been a leader in advancing both qualitative and mixed methods. We’re especially interested in engaging consumers of qualitative research who are more familiar with designing, conducting, or interpreting social science research that uses quantitative methods.

In this post, we reorient the discussion about rigor, shifting from the perspective of “quantitative versus qualitative” methods (analysis of countable data versus open-ended methods such as interviews and observations) to what we think is a more helpful frame of distinguishing deductive from inductive modes of inquiry.

Think “deductive and inductive” instead of “quantitative and qualitative”

Most policy and program evaluation researchers are trained in quantitative methods. Graduate programs in economics, public policy, sociology, psychology, and other fields relevant to the evaluation of policies and programs most often (though not always) emphasize hypothesis testing, that is, assessing whether existing theory applies in a specific context. In this approach, a hypothesis is derived from existing assumptions and tested on a data set intended to represent some population. The goal of this mode of inquiry is to gain evidence about the applicability of that hypothesis for that population. This approach reflects a deductive mode of inquiry: “top down,” “general → specific,” or “theory → data.”
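
For quantitatively trained readers who think in code, here is a minimal sketch of the deductive pattern in Python: the hypothesis (“the program improves outcomes”) is specified before the data are examined, and the data serve only to test it. The data are simulated, and the group labels and effect size are our own illustrative assumptions, not drawn from any study discussed here.

    # Deductive mode in miniature: state the hypothesis first,
    # then test it against data meant to represent a population.
    # All values below are simulated for illustration.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(42)

    # Hypothetical outcome scores for program and comparison groups.
    program = rng.normal(loc=52.0, scale=10.0, size=200)
    comparison = rng.normal(loc=50.0, scale=10.0, size=200)

    # Two-sample t-test of the pre-specified hypothesis.
    t_stat, p_value = stats.ttest_ind(program, comparison)
    print(f"t = {t_stat:.2f}, p = {p_value:.3f}")

A significant result here speaks only to whether the hypothesized pattern holds in data representing this population; it says nothing about the mechanisms at work, which is where inductive inquiry comes in.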

Inductive modes of inquiry seek to do something quite different: The intent is to gain empirically grounded insight into a clearly defined phenomenon through careful selection and observation of information-rich cases, and thereby develop theory, frameworks, and typologies that reflect what might be going on in such cases. This mode of inquiry, which relies on naturalistic observation, is central to advancing science in many fields, including medicine. Think of inductive approaches as “bottom up,” “specific → general,” or “data → theory building.”

Common aims, but unique contributions

Within most fields, deductive and inductive approaches share three aims:

  • To use a systematic process of inquiry to advance knowledge

  • To engage in inquiry-driven research design (that is, the research questions should drive the approaches and methods used, not the other way around)

  • To position a research question in the context of a broader field of scholarship, identifying and addressing gaps in knowledge

Inductive methods are increasingly acknowledged for their comparative advantage in three areas:

Advancing new theory and discovering nuance in existing theory. Program evaluation examines relationships identified within a program’s theory of change or logic model. Whereas deductive approaches are powerful for confirming existing assumptions about a program, inductive approaches shine at offering initial empirical grounding for those relationships. And while deductive approaches may or may not focus on outliers or disconfirming cases, inductive approaches always treat such cases as consequential: they prompt the scholar to seek explanations for variation in patterns and experiences, adding depth and nuance to the interpretation of program impacts.

Developing new frameworks and typologies. In program evaluation, we often seek to measure abstract elements of the theory of change, such as leadership, capacity, learning, and effectiveness. What these notions mean can vary dramatically by context. Inductive modes of inquiry are particularly powerful tools for documenting variation across settings and participants in terms of these concepts.

Understanding the mechanisms underlying statistical associations. Theories of change describe the intended or hoped-for relationship between actions and outcomes, but the processes underlying these relationships are often unclear and vary by program site. With inductive modes of inquiry, program evaluators can begin to uncover the mechanisms and processes by which program inputs and strategies result in (or fail to result in) program outcomes.

Elements of rigor in inductive approaches

While inductive and deductive modes of inquiry have common aims, they use different approaches. Criteria for rigor that are relevant in deductive modes of inquiry, such as generalizability and replicability, are therefore inappropriate for judging inductive inquiry. How, then, can we establish and weigh the rigor of inductive approaches? As we note in our article, a number of scholars have considered these issues.

Rigorous inductive approaches aim to show that the interpretation of the data is credible by clearly describing protocols; demonstrating the connection between those protocols, the study objectives, and the analysis tradition used in the study (for example, grounded theory or ethnography); and describing how the researchers implemented data collection and reviewed data quality.

  • Research questions should be driven by a desire to expand our knowledge of a phenomenon, rather than to test the generalizability of a theory.

  • How cases are selected is of paramount importance to rigorous inductive design; it should never be simply a convenience or random sample. By selecting a case or an informant for inclusion in the study, an inductive researcher is arguing that the case is rich in information and uniquely important for gaining insight into the phenomenon of interest. Because of the in-depth nature of the inquiry, a small number of cases is generally necessary and appropriate.

  • Negative cases can and often should be included. For example, if an evaluator is interested in how a mentoring program influences career advancement, including cases in which career advancement failed to occur despite quality mentoring may add nuance and dimension to the conditions under which the program is likely to succeed.

  • The design, analysis, and reporting phases of inductive research rely on establishing trustworthiness, rather than inter-rater reliability (the degree to which two or more observers score a behavior or phenomenon the same way), as in deductive approaches. (See our JPART article for deeper discussion, and the sketch after this list for a concrete example of inter-rater reliability.) Trustworthiness is based on elements of the research process such as prolonged engagement with the cases and context; “member checking,” where the researcher summarizes her understanding of respondents’ views during or after the interview; reaching consensus on coding among research team members; keeping detailed notes that map interpretations as the research evolves; and reporting major findings with thick description of examples, supported by quotations or excerpts.
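
To make the contrast concrete, here is a minimal sketch of inter-rater reliability, the deductive-side criterion mentioned above, computed as Cohen’s kappa in Python. The coders and interview codes are hypothetical; kappa adjusts raw agreement for the agreement expected by chance.

    # Inter-rater reliability in miniature: Cohen's kappa measures
    # how often two coders assign the same code, beyond chance.
    # The interview codes below are hypothetical.
    from sklearn.metrics import cohen_kappa_score

    coder_a = ["barrier", "support", "support", "barrier", "neutral", "support"]
    coder_b = ["barrier", "support", "barrier", "barrier", "neutral", "support"]

    kappa = cohen_kappa_score(coder_a, coder_b)
    print(f"Cohen's kappa = {kappa:.2f}")  # 1.0 = perfect agreement, 0.0 = chance-level

Trustworthiness, by contrast, is not reducible to a single statistic; it is established through the practices listed above.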

*             *             *

We present these elements of inductive and deductive approaches as conversation starters, seeking to span what we see as a gap in understanding about qualitative approaches. Grasping the distinctions can help program evaluators trained in quantitative methods assess the rigor of findings from case studies, interviews, or focus groups — and can help qualitative scholars field questions about sampling, generalizability, or inter-rater reliability while presenting the contributions of their research.

Suggested citation for this post:

Nowell, Branda, and Kate Albrecht. 2019. “Inviting a Conversation About Rigor in Qualitative Research.” Implementation Research Incubator (blog), April. https://www.mdrc.org/publication/inviting-conversation-about-rigor-qualitative-research.