Survey ID Number
MWI_2014_MCC-IDPSR_v01_M
Title
Infrastructure Development and Power Sector Reform 2014-2015
Abstract
Social Impact was contracted by MCC to develop and conduct an evaluation of the Malawi Compact. Specifically, SI has been tasked to “assess the program design and implementation to develop the most rigorous evaluation design feasible, whether it is a performance or impact evaluation, and identify the most appropriate evaluation methodology feasible given the context.”
Efforts to identify a research design that would allow for a rigorously defined counterfactual were unsuccessful, and as a result this design document outlines plans for a rigorous performance evaluation that will measure key outcome indicators early in the Compact, midway through, and at the end of the Compact, and track changes over time. This evaluation is designed to address the core questions of the evaluation (Table 1). Since the proposed design is a performance evaluation, it is important to note that it may not be possible to state with confidence how the power sector in Malawi has changed (or not changed) as a result of the Compact, as it will not be possible to control for other potential causes of change. In some cases, however, it may be feasible to identify and potentially rule out alternative explanations.
The inability to define a counterfactual requires a reformulation of some of the initial evaluation questions originally proposed by MCC, including some core questions included in the SI-MCC contract. In addition, the Evaluation Assessment Report revealed that both SI and MCC had substantial concerns with regard to the original research questions proposed in Social Impact's contract. This is natural, given that interventions change over time and that the proposed questions should be answerable with the data that can be collected as part of the evaluation. Based on SI's comprehensive desk review, information gathered during the scoping trip, and frequent communication with MCC and MCA-M, the SI evaluation team has developed research questions and research approaches for the PSRP and the IDP project components, as proposed in Tables 2 and 3, respectively. The original questions and the suggested modifications for each question are presented in the Appendix.
Research Questions
Through a rigorous performance evaluation, the evaluation design aims to answer the following core evaluation questions and several complementary research questions:
1. What declines in poverty, increases in economic growth, reductions in the electricity related cost of doing business, increases in access to electricity, and increases in value added production are observed over the life of the Compact?
2. What were the results of the interventions - intended and unintended, positive or negative?
3. Are there differences in outcomes of interest by gender, age, and income? Sex- and income-disaggregated information for businesses and households will be pursued to the extent possible.
4. What are the lessons learned and are they applicable to other similar projects?
5. What is the likelihood that the results of the Project will be sustained over time?
6. At the household level, the evaluations shall focus on the following program/project/activities impacts on households and individuals: income; expenditures, consumption, and access to energy; and individual time devoted to leisure and productive activities.
7. At the enterprise level, the evaluation shall focus on the potential impact of the program/project/activities on: business profitability and productivity; value added production and investment; employment and wage changes; energy consumption and sources of energy used; business losses.
8. At the regulatory, institutional and policy level, the evaluation shall explore the potential impacts of the program/project/activities on: utility operating costs and losses; financial sustainability; private investment, particularly in generation; expansion of electricity access for customers, particularly the poor.
To answer these questions, the evaluation design will leverage diverse research methodologies with different timelines for data collection. The evaluation design can be broken into three main parts, albeit with some overlap:
· IDP evaluation: The IDP design focuses primarily on an intensive metering effort to measure the technical benefits of the project, including changes in energy delivered, outages, and quality. This will be complemented by focus groups with residents of beneficiary communities.
· PSRP evaluation: The PSRP design incorporates five data collection activities: (1) quantitative indicators from the M&E Plan and Malawi Energy Regulatory Authority (MERA) key performance indicators, (2) workflow analyses with relevant units, such as billing and procurement, (3) a series of largely qualitative research activities (with some mini-surveys included), (4) a proposed survey of Electricity Supply Corporation of Malawi (ESCOM) employees, and (5) a process evaluation focused on implementation and the achievement of implementation milestones and outputs, which will be folded into the PSRP data collection activities.
· Enterprise survey: A panel survey of businesses will be used to evaluate both the PSRP and the IDP.
IDP Evaluation Design
Design Overview
We propose that the IDP evaluation consist of two major parts: (1) intensive metering to determine technical benefits, and (2) focus group discussions with beneficiaries. In addition, some of the activities conducted as part of the PSRP evaluation - specifically workflow analyses of responses to outages - will also address IDP benefits made possible by the supervisory control and data acquisition (SCADA) systems.
PSRP
Social Impact proposes five data collection activities for the PSRP evaluation: (1) quantitative indicators from the M&E Plan and MERA key performance indicators, (2) workflow analyses with relevant units, (3) largely qualitative research activities (with some mini-surveys included), (4) a survey of current ESCOM employees, and (5) process evaluation. These activities will occur in three phases: at baseline (to be conducted as soon as possible), at midline, and at the end of the Compact. The evaluation will seek to identify changes over time and then consider the extent to which any observed improvements can be attributed to Compact activities.
Sampling Procedure
1. Enterprise Survey
A sampling frame of businesses can be developed from ESCOM's customer records. There are currently 832 MD customers in the ESCOM network. Of these, 448 are concentrated in the South, 310 are in the Central region, and a mere 66 are in the North. Given the relatively low number of MD customers, it will be necessary to expand the population of interest to three-phase commercial connections, of which there are 5,389 in the ESCOM network.
The sampling strategy for the enterprise survey is yet to be finalized. Although all business consumers are identified as beneficiaries of the Compact, the benefits may vary considerably across these businesses. To focus research efforts, and as per discussions with MCC and Compact stakeholders, non-businesses, such as government agencies, hospitals, and schools, will be dropped from the sampling frame. This list may be further modified once an ongoing ESCOM customer verification program, which will yield a geo-referenced location for each enterprise customer, is complete. The survey will benefit enormously from this customer verification project.
Sampling could be based on a random sample from among this population; however, it might be desirable to oversample certain subgroups to ensure the evaluation's ability to generalize about sub-populations of interest and compare across these subgroups. The evaluation team initially proposed ensuring representative samples of the degree of expected Compact benefits; however, Compact stakeholders have raised concerns that it will be difficult to distinguish among beneficiaries. There are several additional variables that could be given priority in determining the evaluation's approach to sampling. These include:
· Geographical location: South, Central, North
· Industry type: manufacturing, agriculture, or services
· Electricity consumption at baseline: MD, three-phase customers
· Quality of service at baseline: industrial park customer, non-industrial park customer
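Stratifying on variables like those above can be sketched as follows. This is a minimal illustration only: the customer frame, field names, and stratum sizes are hypothetical, and the actual frame would come from ESCOM's customer records once the verification project is complete.

```python
import random

rng = random.Random(42)  # seeded so the sketch is reproducible

# Hypothetical frame of the 5,389 three-phase commercial connections;
# field names and values are illustrative, not ESCOM's actual schema.
frame = [
    {"id": i,
     "region": rng.choice(["South", "Central", "North"]),
     "industry": rng.choice(["manufacturing", "agriculture", "services"]),
     "tariff": rng.choice(["MD", "three-phase"])}
    for i in range(5389)
]

def stratified_sample(records, key, n_per_stratum):
    """Draw a simple random sample of up to n_per_stratum records
    from each stratum defined by `key`."""
    strata = {}
    for r in records:
        strata.setdefault(r[key], []).append(r)
    return {s: rng.sample(members, min(n_per_stratum, len(members)))
            for s, members in strata.items()}

# Oversampling by region would ensure the North, with far fewer
# customers, is still represented in sub-group comparisons.
by_region = stratified_sample(frame, "region", 400)
```

Equal allocation per stratum, as above, maximizes power for cross-stratum comparisons; proportional allocation would instead be used if the goal were population-level estimates.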
Exact sample size calculations will be performed once the uncertainty about the sampling approach is resolved. However, if we assume that the evaluation will seek to make comparisons across two subgroups (e.g., high/low beneficiaries or higher/lower consumption), then the evaluation would require a survey of 1,000 enterprises across both sub-groups in order to measure a minimum detectable effect size of 0.18 standard deviations. Given that this will be a panel study tracking the same businesses over a nearly five-year period, there is likely to be a high rate of attrition as businesses either fail or decline to participate in future iterations of the survey. As such, the evaluation team recommends adjusting this estimate by an additional 25% to account for expected attrition from baseline to end-line, yielding a sample of 1,250 businesses. If the study aims to ensure comparisons across three sub-groups, then an additional 625 firms would need to be added to the sample. Alternatively, Figure 8 shows the tradeoff between the minimum detectable difference between sub-groups and the sample size: at higher minimum detectable differences, smaller samples would suffice. For budgetary purposes, in the attached budget we have estimated a sample of 1,250 businesses across Lilongwe, Mzuzu, and Blantyre, with a majority of sampling in Lilongwe, where most program beneficiaries will be located.
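The sample-size arithmetic above can be reproduced with a standard two-sample power calculation. The z-values below assume 95% confidence and 80% power, the conventional values; the text does not state them explicitly.

```python
import math

# Conventional two-sided z-values (normal approximation)
Z_ALPHA = 1.96   # 95% confidence
Z_BETA = 0.84    # 80% power

def mde_two_groups(n_total):
    """Minimum detectable effect (in standard deviations) for a
    comparison of two equal-sized sub-groups drawn from n_total."""
    n_per_group = n_total / 2
    return (Z_ALPHA + Z_BETA) * math.sqrt(2 / n_per_group)

def inflate_for_attrition(n, attrition_rate=0.25):
    """Inflate a sample size to offset expected panel attrition."""
    return math.ceil(n * (1 + attrition_rate))

print(round(mde_two_groups(1000), 2))   # 0.18 SD, matching the text
print(inflate_for_attrition(1000))      # 1250 businesses
```

Adding a third equal-sized sub-group of 500 firms, inflated by the same 25%, gives the additional 625 firms cited above.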
To further refine this design, the evaluation team would need to obtain and analyze: (1) existing customer data for all MD and three-phase commercial customers, and (2) forthcoming data from the customer verification project, including GPS data and information linking connections to specific substations. Furthermore, it will be necessary to conduct interviews or focus groups with diverse types of businesses in Blantyre, Lilongwe, and Mzuzu to better understand the energy challenges that they confront and how they respond to those challenges. Finally, in consultation with MCC and other Compact stakeholders, the evaluation team will finalize the sampling strategy.
2. ESCOM Survey
The evaluation team proposes to conduct a survey of a sample of ESCOM employees, who currently number 2,570. While it would be possible to conduct a census of the population of ESCOM employees, the evaluation team will be able to make accurate inferences from a sample of employees. Employees will be randomly selected for inclusion in the sample. Selected individuals will be surveyed in person if they are in the urban areas of Blantyre, Lilongwe, or Mzuzu, and by phone if they are not. The ESCOM CEO has already expressed interest in the survey, and we hope that ESCOM leadership will encourage a high response rate from within the ranks of the utility. We calculate that a sample of 829 will be necessary. As shown in Figure 7, at a standard of .80 power, the minimum detectable effect size for a comparison of waves of the survey is estimated at .14 standard deviations. In terms of sample proportions, and assuming maximum variation (50%/50%), we would need to observe approximately a 4-percentage-point difference between baseline and midline to be confident that a change occurred between these two time periods (see Equation 1). For example, if at baseline 65% of the sampled ESCOM employees rated ESCOM's customer service favorably and at midline this percentage rises to 69%, then we could be confident that satisfaction had increased over time.
(1) Random error at 95% confidence = .04 = 1.645 × √(.5×.5/829 + .5×.5/829)
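Equation 1 can be verified numerically. This sketch uses the z-value of 1.645 and the maximum-variation proportion (50%) stated in the text, with equal samples of 829 in each wave.

```python
import math

Z = 1.645    # z-value used in Equation 1
n = 829      # sample size in each survey wave
p = 0.5      # maximum-variation assumption (50%/50%)

# Margin of error for the difference in a proportion between two
# survey waves, each with n respondents (Equation 1)
moe = Z * math.sqrt(p * (1 - p) / n + p * (1 - p) / n)
print(round(moe, 2))  # 0.04, i.e. a 4-point change is detectable
```

This confirms that a shift such as the 65% to 69% example above would exceed the random-error threshold.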