How to Prepare for an Interview


The core of the first interview at IDinsight is a case study about evaluating the social impact of a hypothetical client’s program. You will be asked to critically assess the validity of different types of evidence and to explain how you would communicate the strengths and shortcomings of that evidence to the client. You will be awarded points for your ability to think critically and creatively about the problem and to construct a logical, concise argument in support of your position. While there are no “right” answers to the case study – in fact, you will receive extra points for out-of-the-box solutions – you may find it useful to review materials ahead of the interview that explain how to identify the causal impact of a policy on social outcomes of interest. Some suggested (and entirely optional) readings include:

– Past IDinsight client engagements on our Reports webpage, which illustrate how we balance rigor with practical constraints

– IDinsight’s unique approach to decision-focused evaluation in this 3ie video presentation (or, alternatively, the 3ie Working Paper)

– This summary of causal methods in impact evaluation. Don’t fixate on the terminology of each method; in the interview, points will be awarded for your ability to dissect an evaluation design, not for your ability to apply the correct label. If the material is still new or unfamiliar after you review this summary, you may find it useful to work through these excellent short primers from the Running Randomized Evaluations website:

— Module 2.1: Measuring causal impact

— Module 2.2: Advantages and limitations of non- and quasi-experimental methods

— Module 2.3: What is randomization and how does it solve the causality problem?

— Module 2.4: Benefits and limits of randomization


To give you a sense of the types of questions that may come up during the interview, here is an excerpt from last year’s case study:

Imagine that you work at IDinsight and that your client is the director of an agriculture NGO in Bihar (India). The director wants to increase crop yields among smallholder rice farmers in Bihar, and she is contemplating expanding a program that distributes fertilizer directly to rice farmers.

Example Question 1:

During your first meeting, the director tells you that her NGO already distributed fertilizer to 100 rice farmers a few months ago. Compared with rice farmers who did not receive fertilizer, the farmers who received fertilizer have approximately the same yields per acre. She is concerned that this evidence suggests the program is not effective at improving yields, and so she is considering scrapping the program, but she wants to hear from you first: Is this sufficient evidence for the director to declare the program a failure?
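
To illustrate the kind of reasoning this question invites (a hedged sketch, not an official answer key), the observed difference in yields between farmers who did and did not receive fertilizer can be decomposed, in standard potential-outcomes notation, as

\[
\underbrace{E[Y \mid D = 1] - E[Y \mid D = 0]}_{\text{observed difference}}
= \underbrace{E[Y_1 - Y_0 \mid D = 1]}_{\text{effect on the treated}}
+ \underbrace{E[Y_0 \mid D = 1] - E[Y_0 \mid D = 0]}_{\text{selection bias}}
\]

where \(D\) indicates receipt of fertilizer and \(Y_1, Y_0\) are a farmer’s potential yields with and without it. A difference of roughly zero could reflect a program with no effect, or a positive effect masked by negative selection (for example, if fertilizer was deliberately given to farmers with less productive land).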

Example Question 2:

Suppose that rice farmers were selected to receive fertilizer based on the size of their farm: Any farm smaller than 2 acres received fertilizer, and all farms larger than 2 acres did not receive fertilizer. After some analysis, you find that crop yields increased more on farms right below the 2-acre cutoff than on farms right above the cutoff. How would you interpret these results: Do they tell us anything meaningful about the impact of the program?
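
For readers who want to connect this setup to the formal literature (again, purely illustrative, not an expected answer), it resembles a regression discontinuity design: with treatment assigned to farms below the 2-acre cutoff, the comparison just around the cutoff targets

\[
\tau_{\text{RD}} = \lim_{a \uparrow 2} E[Y \mid A = a] \;-\; \lim_{a \downarrow 2} E[Y \mid A = a],
\]

where \(A\) is farm size in acres and \(Y\) is yield. This identifies a local effect for farms near 2 acres, provided farm sizes cannot be precisely manipulated to fall just under the cutoff; it says little about much smaller or much larger farms.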

Example Question 3:

Suppose that you are analyzing the data from the evaluation, and you learn that 10% of farmers in the control group used fertilizer on their farm. How, if at all, will this affect your ability to estimate the causal impact of the program, and what can you do about it?
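
One standard way to frame this situation (a hedged sketch that assumes, beyond what the question states, a randomized evaluation with complete take-up in the treatment group): the intention-to-treat effect can be rescaled by the difference in fertilizer use across groups to recover the effect among compliers:

\[
\text{LATE} = \frac{E[Y \mid Z = 1] - E[Y \mid Z = 0]}{E[D \mid Z = 1] - E[D \mid Z = 0]}
= \frac{\text{ITT}}{1 - 0.10} \approx 1.11 \times \text{ITT},
\]

where \(Z\) is random assignment and \(D\) is actual fertilizer use. Under these assumptions, the 10% contamination attenuates the simple treatment-control comparison toward zero but does not invalidate it; it changes what the estimate means.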


The only materials that you need during the interview are a calculator and a pencil/paper.

If you are interested in learning more about causal inference, field experiments, and the types of research methods that we use at IDinsight, check out the excellent resources below. (Please note that you are NOT expected to review these ahead of the interview!)

Running Randomized Evaluations by Rachel Glennerster and Kudzai Takavarasha, along with research studies by professors affiliated with the Abdul Latif Jameel Poverty Action Lab, Innovations for Poverty Action, and similar institutions. The book’s website also includes excellent supplementary resources, lecture slides, and exercises.

MITx’s free online course 14.740x: Foundations of Development Policy

Field Experiments by Alan S. Gerber and Donald P. Green

Mastering Metrics by Joshua D. Angrist and Jörn-Steffen Pischke

Impact Evaluation in Practice (2nd Edition) by Paul Gertler et al. The World Bank also hosts the excellent technical blog Development Impact, which regularly posts about cutting-edge tools in impact evaluation.

Handbook on Impact Evaluation: Quantitative Methods and Practices by Shahidur Khandker et al. at the World Bank.