TZA_2016_EQUIPIE-ML_v02_M
Education Quality Improvement Programme Impact Evaluation Midline Survey 2016
Name | Country code |
---|---|
Tanzania | TZA |
The EQUIP-T Impact Evaluation Midline Survey is the second of three rounds of the EQUIP-T impact evaluation and was conducted in 2016. The baseline survey was conducted in 2014 and the second and final follow-up survey (EQUIP-T Impact Evaluation Endline Survey) will be conducted in 2018. The EQUIP-T Impact Evaluation is designed and implemented by Oxford Policy Management Ltd.
The EQUIP-T Impact Evaluation is designed to measure the impact of the EQUIP-T programme over time on pupil learning, and on selected teacher behaviour and school leadership and management outcomes.
The Education Quality Improvement Programme in Tanzania (EQUIP-T) is a Government of Tanzania programme, funded by the UK Department for International Development (DFID), which seeks to improve the quality of primary education, especially for girls, in seven regions of Tanzania. It focuses on strengthening the professional capacity and performance of teachers, school leadership and management, the systems that support district management of education, and community participation in education.
The independent Impact Evaluation (IE) of EQUIP-T is a four-year study funded by DFID. It is designed to: i) generate evidence on the impact of EQUIP-T on primary pupil learning outcomes, including any differential effects for boys and girls; ii) examine perceptions of the effectiveness of different EQUIP-T components; iii) provide evidence on the fiscal affordability of scaling up EQUIP-T post-2018; and iv) communicate evidence generated by the impact evaluation to policy-makers and key education stakeholders.
The research priorities for the midline IE are captured in a comprehensive midline evaluation matrix (see Annex B in the 'EQUIP-Tanzania Impact Evaluation. Midline Technical Report, Volume I: Results and Discussion' under Reports and policy notes). The matrix sets out evaluation questions linked to the programme theory of change, and identifies the sources of evidence to answer each question: the quantitative survey, the qualitative research, or both. It asks questions related to the expected results at each stage along the results chain (from the receipt of inputs to the delivery of outputs, and contributions to outcomes and impact) under each of the programme's components. The aim is to establish: (i) whether changes have happened as expected; (ii) why they happened or did not happen (i.e. whether key assumptions in the theory of change hold or not); (iii) whether there are any important unanticipated changes; and (iv) what links there are between the components in driving changes.
The main IE research areas are:
The IE uses a mixed methods approach that includes:
A quantitative survey of 100 government primary schools in 17 programme treatment districts and 100 schools in 8 control districts in 2014, 2016 and 2018 covering:
Standard three pupils and their parents/caregivers;
Teachers who teach standards 1-3 Kiswahili;
Teachers who teach standards 1-3 mathematics;
Teachers who teach standards 4-7 mathematics;
Head teachers; and
Standard two lesson observations in Kiswahili and mathematics.
Qualitative fieldwork in nine research sites that overlap with a sub-set of the quantitative survey schools, in 2014, 2016 and 2018, consisting of key informant interviews (KIIs) and focus group discussions (FGDs) with head teachers, teachers, pupils, parents, school committee (SC) members, region, district and ward education officials and EQUIP-T programme staff.
The midline data available in the World Bank Microdata Catalog are from the EQUIP-T IE quantitative midline survey conducted in 2016. For the qualitative research findings and methods see 'EQUIP-Tanzania Impact Evaluation. Midline Technical Report, Volume I: Results and Discussion' and 'EQUIP-Tanzania Impact Evaluation. Midline Technical Report, Volume II: Methods and Supplementary Evidence' under Reports and policy notes.
Sample survey data [ssd]
Version 2.2: Edited, anonymised dataset for public distribution.
2021-11
Version 2.2 consists of four edited and anonymised datasets (at school, teacher, pupil and lesson level) with the responses to a small number of questions removed (see 'List of Variables Excluded from EQUIP-T IE Midline Survey Datasets' provided under Technical Documents); these were removed due to data quality issues or because no or only incomplete records existed. The datasets also contain selected constructed indicators prefixed by n_. These constructed indicators are included to save data users time as they require complex reshaping and extraction of data from multiple sources (but they could be generated by data users if preferred). Note that the first version of the archived dataset did not include the data from the pupil learning assessment (which were kept confidential until the completion of the impact evaluation in 2020). This second version of the public datasets includes the data from the pupil learning assessment conducted at midline. The archived pupil dataset and associated questionnaire 'EQUIP-T IE Pupil Background and Learning Assessment (PB) Midline Questionnaire' have therefore been updated in this new version.
The following variables were added:
All variables from p_a1_1 to p_k6_021 (these are the variables related to the learning assessment that we had kept confidential at midline)
The following variables that were constructed by the OPM analysis team were also added:
· perraschK
· n_p_perfbandK
· perraschM_miss
· n_p_perfbandM
· n_sc_povertyscore
· n_sc_belowpoverty
The weight variables are: 'weight_school' and 'weight_pupil'
The scope of the EQUIP-T IE Midline Survey includes:
HEAD TEACHER/HEAD COUNT/SCHOOL RECORDS: Head teacher background information, qualifications, frequency/type of school planning/management in-service training received, availability and contents of whole school development plan, existence and types of teacher performance rewards and sanctions, frequency of staff meetings, ward education coordinator supervision and support to the school, head teacher motivation, head teacher attendance, reasons for head teacher and teacher absenteeism (reported by head teachers), teacher attendance (from school records and by headcount on the day of the survey), teacher punctuality, pupil attendance (from school records and by headcount on the day of the survey), pupil enrolment, availability of different types of school records, school characteristics, infrastructure and funding, receipt of in-kind resources.
STANDARD 3 PUPILS: Pupil background information, Kiswahili Early Grade Reading Assessment (EGRA) and Early Grade Mathematics Assessment (EGMA) based on standards 1 and 2 national curriculum requirements. Note: The same pupils were assessed in both Kiswahili and mathematics.
PARENTS OF SAMPLED STANDARD 3 PUPILS: household and parental characteristics, household assets.
TEACHERS WHO TEACH STANDARDS 1-3 KISWAHILI AND/OR MATHEMATICS: Interview including background information, qualifications, frequency/type of in-service training received, frequency/nature of lesson observation and nature of feedback, frequency/nature of performance appraisal and teacher motivation.
TEACHERS WHO TEACH STANDARDS 1-3 KISWAHILI: Kiswahili subject knowledge assessment (teacher development needs assessment) based on the primary school Kiswahili curriculum standards 1-7 but with limited materials from standards 1 and 2.
TEACHERS WHO TEACH STANDARDS 1-3 MATHEMATICS: Mathematics subject knowledge assessment (teacher development needs assessment) based on the primary school mathematics curriculum standards 1-7 but with limited materials from standards 1 and 2.
TEACHERS WHO TEACH STANDARDS 4-7 MATHEMATICS: Mathematics subject knowledge assessment (teacher development needs assessment) based on the primary school mathematics curriculum standards 1-7 but with limited materials from standards 1 and 2.
LESSON OBSERVATION: Standard 2 Kiswahili and mathematics lesson observations of inclusive behaviour of teachers with respect to pupil gender, spatial inclusion, key teacher practices in the classroom, availability of lesson plan, availability of seating, textbooks, exercise books, pens/pencils during the lesson.
Topic | Vocabulary |
---|---|
Education | World Bank |
Primary education | World Bank |
The survey is representative of the 17 EQUIP-T programme treatment districts. The survey is NOT representative of the 8 control districts. For more details see the section on Representativeness in 'EQUIP-Tanzania Impact Evaluation. Final Baseline Technical Report, Volume I: Results and Discussion' and 'EQUIP-Tanzania Impact Evaluation. Final Baseline Technical Report, Volume II: Methods and Technical Annexes' under Reports.
The 17 treatment districts are:
The 8 control districts are:
District
Name |
---|
Oxford Policy Management Ltd |
Name |
---|
Department for International Development UK |
Because the EQUIP-T regions and districts were purposively selected (see 'EQUIP-Tanzania Impact Evaluation. Final Baseline Technical Report, Volume I: Results and Discussion' under Reports and policy notes), the IE sampling strategy used propensity score matching (PSM) to: (i) match eligible control districts to the pre-selected and eligible EQUIP-T districts (see below); and (ii) match schools from the control districts to a randomly selected sample of treatment schools in the treatment districts. The same schools are surveyed in each round of the IE (a panel of schools), and standard 3 pupils will be interviewed at each round of the survey (no pupil panel).
Identifying districts eligible for matching
Eligible control and treatment districts were those not participating in any other education programme or project that might confound the measurement of EQUIP-T impact. To generate the list of eligible control and treatment districts, all districts that were contaminated by other education programmes or projects, or that might be affected by programme spill-over, were excluded as follows:
Sampling frame
To be able to select an appropriate sample of pupils and teachers within schools and districts, the sampling frame consisted of information at three levels:
The sampling frame data at the district and school levels were compiled from the following sources: the 2002 and 2012 Tanzania Population Censuses, Education Management Information System (EMIS) data from the Ministry of Education and Vocational Training (MoEVT) and the Prime Minister's Office for Regional and Local Government (PMO-RALG), and the UWEZO 2011 student learning assessment survey. For within-school sampling, the frames were constructed upon arrival at the selected schools and were used to sample pupils and teachers on the day of the school visit.
Sampling stages
Stage 1: Selection of control districts
Because the treatment districts were known, the first step was to find sufficiently similar control districts that could serve as the counterfactual. PSM was used to match eligible control districts to the pre-selected, eligible treatment districts using the following matching variables: Population density, proportion of male headed households, household size, number of children per household, proportion of households that speak an ethnic language at home, and district level averages for household assets, infrastructure, education spending, parental education, school remoteness, pupil learning levels and pupil drop out.
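For illustration, the district matching step could be sketched in Stata roughly as follows. This is not the IE team's actual procedure or code: the file name, the variable names and the use of the user-written psmatch2 command are all assumptions made for this example.

```stata
* Minimal sketch of district-level propensity score matching (assumed names).
use districts_eligible, clear          // hypothetical district-level file
* Model the probability of being an EQUIP-T district on the matching
* variables listed above (all variable names hypothetical):
logit equipt popdensity malehead hhsize nchildren ethniclang hhassets ///
      infrastructure eduspend parented remoteness learning dropout
predict pscore, pr                     // estimated propensity score
* Each treatment district would then be matched to the eligible control
* district with the closest score, e.g. with the user-written psmatch2:
* ssc install psmatch2
* psmatch2 equipt, pscore(pscore) neighbor(1) noreplacement
```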
Stage 2: Selection of treatment schools
In the second stage, schools in the treatment districts were selected using stratified systematic random sampling. The schools were selected using a probability proportional to size approach, where the measure of school size was the standard two enrolment of pupils. This means that schools with more pupils had a higher probability of being selected into the sample. To obtain a representative sample of programme treatment schools, the sample was implicitly stratified along four dimensions:
Stage 3: Selection of control schools
As in stage one, a non-random PSM approach was used to match eligible control schools to the sample of treatment schools. The matching variables were similar to the ones used as stratification criteria: Standard two enrolment, PSLE scores for Kiswahili and mathematics, and the total number of teachers per school.
The midline survey was conducted for the same schools as the baseline survey (a panel of schools) and the endline survey in 2018 will cover the same sample of schools. However, the IE does not have a panel of pupils as a pupil only attends standard three once (unless repeating). Thus, the IE sample is a repeated cross-section of pupils in a panel of schools.
Stage 4: Selection of pupils and teachers within schools
Pupils and teachers were sampled within schools using systematic random sampling based on school registers. The within-school sampling was assisted by selection tables automatically generated within the computer assisted survey instruments.
Per school, 15 standard 3 pupils were sampled. For the teacher development needs assessment (TDNA), up to three teachers of standards 1-3 Kiswahili, up to three teachers of standards 1-3 mathematics, and up to three teachers of standards 4-7 mathematics were randomly sampled in the sample treatment schools. For the teacher interview, one change was made at midline: instead of sampling up to three teachers of standards 1-3, all of them were interviewed, to boost the sample size, since many schools are small.
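As an illustration of the within-school systematic sampling step, a minimal Stata sketch follows. The register file and variable names are hypothetical; in the field, the equivalent selection tables were generated automatically by the CAPI software rather than by code like this.

```stata
* Minimal sketch: systematic random sampling of 15 pupils from a register.
use school_register, clear             // hypothetical file, one row per pupil
local n = min(15, _N)                  // take all pupils if fewer than 15 present
local step = _N/`n'                    // sampling interval
local start = `step'*(1 - runiform())  // random start in (0, step]
gen byte sampled = 0
forvalues i = 0/`=`n'-1' {
    replace sampled = 1 in `=ceil(`start' + `i'*`step')'
}
```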
Replacement sample
At baseline, if a selected school could not be surveyed it was replaced. In the process of sampling, the impact evaluation team drew a replacement sample of schools, which was used for this purpose (reserve list) and the use of this list was carefully controlled. Five out of the 200 original baseline sample schools were replaced during the fieldwork. At midline, all of the 200 schools surveyed at baseline were visited again (no replacements).
Sample sizes
The actual sample sizes at midline are:
Representativeness
The results from the treatment schools are representative of government primary schools in the 17 EQUIP-T programme treatment districts. However, the results from the schools in the 8 control districts are NOT representative because these districts were not randomly sampled but matched to the 17 treatment districts using propensity score matching (see above).
Unit response
Item response
Item response rates were generally high. For the intended number of observations for the indicators presented in the 'EQUIP-Tanzania Impact Evaluation. Midline Technical Report, Volume I: Results and Discussion', see Section 1.3.3 ML quantitative survey instruments and sample, and for the actual number of observations see Annex F Detailed statistical tables of results from programme treatment districts in 'EQUIP-Tanzania Impact Evaluation. Midline Technical Report, Volume II: Methods and Supplementary Evidence'.
The survey is only representative of the EQUIP-T programme treatment area, and therefore survey weights were only constructed for schools, pupils and teachers in the treatment group (not for the control group).
To obtain results that are representative of the EQUIP-T programme treatment areas, treatment estimates should be weighted using the provided survey weights that are normalised values of the inverse probabilities of selection into the sample for each unit of analysis. The relevant probabilities of selection differ depending on whether analysis is carried out at school, pupil or teacher level, and survey weights for each of these units of analysis are included in the datasets.
School weights (treatment group only)
The probability of selection of each school depended on the total number of schools sampled and on its size relative to the total number of enrolled pupils across all schools in the programme areas. Formally, the probability of a given school being selected into the sample equals the total number of schools sampled multiplied by the ratio of the number of pupils in the given school to the total number of pupils in all schools in the relevant programme areas. The school weights are appropriately normalised inverses of these probabilities.
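Restated in symbols (notation introduced here for convenience; it does not appear in the source documents):

```latex
p_s = m \cdot \frac{E_s}{\sum_k E_k}, \qquad w_s \propto \frac{1}{p_s}
```

where m is the total number of schools sampled, E_s is the standard two enrolment of school s, and the sum runs over all schools in the relevant programme areas; the school weight w_s is the appropriately normalised inverse of p_s.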
Note: Refer to the end of this section for the strata, weights and finite population correction factor variables included in the dataset.
Pupil weights (treatment group only)
15 standard 3 pupils were randomly sampled at each school. The probability of selection of a pupil in a given school equals the school's probability of selection (see above) multiplied by the ratio of the number of pupils sampled per school (15 in all schools except those with fewer than 15 pupils present on the day) to the total number of eligible pupils in the given school. The pupil weights are appropriately normalised inverses of these probabilities.
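In the same notation, for pupil i in school s:

```latex
p_{i,s} = p_s \cdot \frac{n_s}{N_s}, \qquad w_{i,s} \propto \frac{1}{p_{i,s}}
```

where n_s is the number of pupils sampled in school s (15, or the number present if fewer) and N_s is the total number of eligible pupils in school s.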
Note: Refer to the end of this section for the strata, weights and finite population correction factor variables included in the dataset.
Teacher weights (treatment group only)
The probability of selection of a teacher in a given school equals the school's probability of selection (see above) multiplied by the ratio of the number of teachers selected for a given teacher instrument per school to the total number of teachers eligible for that instrument. The teacher weights are appropriately normalised inverses of these probabilities.
Note: Refer to the end of this section for the strata, weights and finite population correction factor variables included in the dataset.
Stratification, clustering and finite population corrections
The survey weights should be used within a survey set-up that takes into account stratification, clustered sampling and finite population corrections.
Stratification during sampling was used at the primary sampling level, that is, at school level, and not at the lower levels (pupil and teacher). For the estimation set-up, strata for schools are defined by districts and teacher-body size terciles. Although, during sampling, schools were implicitly stratified by primary school leaving examination (PSLE) scores as well, this is a continuous variable that cannot be used to define strata in the estimation set-up.
Clustering is only relevant for pupil and teacher level data, as schools were the primary sampling units within the eligible programme treatment districts. Pupil data are also hierarchical in nature, with pupils clustered within schools. Hence, for pupil and teacher estimates, clustering is set at the school level.
Because large proportions of the total eligible population were sampled in many schools at the teacher and pupil levels, the estimation set-up should also account for the finite population correction (FPC) factor, defined below. In the case of school level data, the FPC factor is constant across all schools, as the sample of schools was drawn from a single population of all eligible schools in the programme treatment areas. However, for teacher and pupil level data, the FPC factor varies from school to school, as population sizes (and, in the case of teacher level data, sample sizes) vary.
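In symbols, the FPC factor described above is:

```latex
\mathrm{fpc} = \sqrt{\frac{N - n}{N - 1}}
```

where N is the size of the population from which the sample is drawn and n is the sample size.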
Stratification, weight, finite population correction and treatment status variables
In the EQUIP-T IE datasets the stratification, weight, FPC and treatment status variables are as follows:
The strata variable is: strata
The school weights variable is: weight_school
The school finite population correction factor is: fpc_school
The pupil weight variable is: weight_pupil
The pupil finite population correction factor is: fpc_pupil
The teacher interview weight variable is: weight_tchint
The teacher interview finite population correction factor is: fpc_tchint
The teacher development needs assessment (TDNA) weight variable is: weight_tdna
The teacher development needs assessment (TDNA) finite population correction factor is: fpc_tdna
The teacher roster weight variable is: weight_teacherroster
The teacher roster finite population correction factor is: fpc_teacherroster
The treatment status variable is: treatment, where 0 = control school and 1 = treatment school.
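A minimal Stata survey set-up using these variables might look like the sketch below. This is not the IE team's published specification: the dataset file names, the school identifier (schoolid) and the outcome variables are hypothetical, and the two-stage declaration for pupils assumes the school-level FPC is available in (or merged onto) the pupil file.

```stata
* School-level estimates (treatment group only; no weights exist for controls).
use school_dataset, clear                // hypothetical file name
keep if treatment == 1
svyset schoolid [pweight=weight_school], strata(strata) fpc(fpc_school)
svy: mean some_school_outcome            // hypothetical outcome variable

* Pupil-level estimates: schools are the PSUs, pupils the second stage.
use pupil_dataset, clear                 // hypothetical file name
keep if treatment == 1
svyset schoolid [pweight=weight_pupil], strata(strata) fpc(fpc_school) ///
    || _n, fpc(fpc_pupil)
svy: mean some_pupil_outcome             // hypothetical outcome variable
```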
The enumerators administered all of the instruments using Computer Assisted Personal Interviewing (CAPI), except for the teacher development needs assessments (TDNAs) which were administered on paper, as these take the form of mock pupil tests which teachers mark. All instruments were translated into Kiswahili and administered to all respondents in Kiswahili.
The midline survey round uses a set of survey instruments that retain most of the baseline questions, with some additions to take into account changes in the programme context and in the design and focus of programme implementation. The main changes to instruments are:
Standard 3 pupil Kiswahili and maths test (same pupils tested in both Kiswahili and maths)
Parents of Standard 3 tested pupil interview (poverty score card)
Standards 1, 2 and 3 teacher interview
Standards 1, 2 and 3 teacher development needs assessment (TDNA) Kiswahili
Standards 1, 2 and 3 teacher development needs assessment (TDNA) maths
Standards 4-7 teacher development needs assessment (TDNA) maths
Head teacher interview and data collection from school records
Standard 2 Kiswahili and maths lesson observations
Headcount observation
Pre-tests
The revisions to the baseline instruments were trialled during two midline pre-tests which took place in November 2015 and February 2016.
The first ML pre-test took place in Kinondoni district (Dar es Salaam) on 24th-25th November 2015. A small team of two OPM staff members and three enumerators from the BL survey visited two schools to: i) test the functionality of the updated electronic questionnaires in the updated CAPI software (Surveybe); and ii) gather information on how the change in government, the introduction of the Literacy and Numeracy Educational Support (LANES) programme in 2015, and the resulting changes to the standard 1 and 2 curriculum were affecting primary education at school level.
A second, full pre-test of all instruments and protocols took place from the 8th to 12th of February 2016 in Kisarawe District, Pwani region. A team of 15 (five OPM staff, one OPM intern, seven enumerators, a DFID representative, and an education professor from the University of Dar es Salaam who is a senior member of the IE team) visited four schools, following two days of classroom-based training. The pre-test resulted in the following outcomes:
Start | End | Cycle |
---|---|---|
2016-04-15 | 2016-05-27 | Midline |
Name |
---|
Oxford Policy Management Ltd |
Quality control and data checking protocols
At the end of each working day, supervisors collected all interview files from their team members and uploaded them to a shared, organised Dropbox folder set up by the data manager. The data manager received all files from all eight teams, exported them into Stata (statistical software) data files, and ran daily checks on all files to make sure they were complete and to identify potential errors. Several mechanisms were put in place to ensure the high quality of the data collected during the survey; these are briefly summarised in turn below.
Selection and supervision of enumerators
As discussed above, each enumerator was supervised at least once by the training team during the training, piloting and first week of data collection. This allowed a well-informed selection of enumerators and their allocation into roles matching individual strengths and weaknesses.
CAPI built-in routing and validations
One important quality control mechanism in CAPI surveys is the use of automatic routing and checking rules built into the CAPI questionnaires, which flag simple errors during the interview, i.e. early enough for them to be corrected during the interview. In each CAPI instrument, validations and checks were incorporated into the design to significantly reduce errors and inaccuracies during data collection. In addition to automatic skip patterns built into the design to eliminate errors resulting from wrong skips, the CAPI validations also checked for missing fields, out-of-range values and inconsistencies within instruments.
Secondary consistency checks and cleaning in Stata
The ML survey exploited another key advantage of CAPI surveys, the immediate availability of data, by running a range of secondary consistency checks across all data on a daily basis in Stata. Data received from the field were exported to Stata the following day, and a range of do-files were run to assess consistency and completeness, and to make corrections where necessary. The checks comprised: ID uniqueness and matching across instruments; completeness of observations (target sample size versus actual); and intra- and inter-instrument consistency and out-of-range checks. The data manager ran the checking do-file daily on the latest cleaned data. This returned a list of potential issues in long format, which the data manager then investigated before undertaking the necessary cleaning actions. Whenever an issue was flagged, an explanation was sought, either by reviewing enumerator comments or by phoning teams. In addition to the checking and cleaning process, all enumerator comments, as well as 'other (specify)' variables, were translated from Kiswahili to English. All translated entries were further reviewed by the data analysis team to 1) ensure that they were understandable and properly translated into English, and 2) check that none of the 'other (specify)' answers for multiple response questions were in fact synonymous with one of the response items. The review resulted in a long list of 'other (specify)' items that were then recoded into one of the available response items.
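For illustration, checks of this kind might look as follows in Stata; the file and variable names are hypothetical, and this is a sketch rather than the team's actual do-file.

```stata
* Daily checks sketch (hypothetical names).
use pupil_latest, clear
isid schoolid pupilid                    // IDs uniquely identify each record
* Completeness: actual versus target sample size per school (15 pupils).
bysort schoolid: gen n_pupils = _N
assert n_pupils <= 15
* Cross-instrument ID matching: every pupil's school must exist in the
* school-level file.
merge m:1 schoolid using school_latest, assert(match using) keep(match)
```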
Monitoring fieldwork progress and performance indicators
In addition to the above checks, which were specific to each instrument, the survey team monitored the general progress of the fieldwork and specific indicators revealing the performance of teams and enumerators over time, such as: the number of control/treatment schools completed; the number of teacher/pupil/parent/lesson observation interviews completed; the average interviewing time for each instrument; the number of pupils interviewed instead of their parents for the poverty scorecard instrument; and how many teacher interviews were conducted over the phone. These indicators were constructed in a Stata do-file that ran on the latest cleaned dataset and were then uploaded into a Google spreadsheet that broke down each indicator by team, enumerator (where applicable) and week of data collection. This was reviewed regularly by the fieldwork management team and the overall IE project manager, and used to give feedback to weaker teams and to improve performance.
Back-checking data
The quality assurance (QA) protocol involved back-checks conducted over the phone and in the field. Two members of the fieldwork management team called back interviewed teachers to confirm that the interviews had indeed been conducted. Furthermore, a list of questions to be re-asked of teachers was compiled and administered over the phone to ensure that the information had been properly collected. In addition, the fieldwork management team re-visited 10 schools and 45 households to check whether interviews had been administered properly.
Integration of Analysis and Survey Team
Another central element of QA was the strong integration of the fieldwork management team and the members of the quantitative analysis team, including the overall IE project manager. Members of both teams were involved in the fieldwork preparation and implementation, and in the analysis process which followed.
Personnel: Oxford Policy Management's (OPM) Tanzania office conducted the Midline Impact Evaluation survey.
The fieldwork management team comprised seven members (including six OPM staff) led by a quantitative survey project manager who had overall responsibility for the design, implementation, management and quality of the fieldwork. Since all the survey instruments except the teacher development needs assessments (TDNAs) were administered using computer assisted personal interviewing (CAPI), the team also included several members with strong computer programming skills in the relevant software (Surveybe). The overall project manager for the IE, who is responsible for the content of the instruments, worked closely with the fieldwork team during pre-testing, training, piloting and early fieldwork. A total of 51 enumerators were invited to the training. These were selected based on the following criteria (in order): (i) high performance during the EQUIP-T BL survey (about half of the enumerators from BL also worked on the ML survey); (ii) a strong track record on other OPM-led surveys; and (iii) new recruits who were interviewed over two days and selected based on their prior survey experience and knowledge of education.
Fieldwork preparation: The early fieldwork preparation consisted of pre-testing the instruments and protocols, obtaining permits from the government for visiting schools during the pre-tests, pilot and fieldwork, revising the BL fieldwork manual, and refining the instruments and protocols.
Pre-tests of instruments: See Questionnaires Section below.
Permits and reporting
As part of the preliminary preparations for any survey in Tanzania, two types of government permits have to be obtained prior to the beginning of research work:
Upon receipt of the permits, the anticipated fieldwork needs to be reported at the regional and district levels. Letters introducing the study to local leaders are obtained in the process. For the ML IE survey, the COSTECH research clearance and an introduction letter were received two months prior to the start of actual fieldwork. For the ministry permits, OPM reported to the Prime Minister's Office for Regional Administration and Local Government (PMO-RALG) and to the Ministry of Education and Vocational Training (MoEVT). Reporting to MoEVT was relatively fast and simple: the initial submitted letters were followed up in person, and an introduction letter to all 12 Regional Administrative Secretaries was received after seven days. Getting government approvals from PMO-RALG proved very time-consuming. The final decision was to shift to a physical reporting approach, as sending letters by courier and follow-up phone calls had been unsuccessful. In a combined effort, three of the fieldwork management team members reported in person to all 10 regional and 25 district offices during the enumerator training period. In total, 50 person-days (including travel days, as distances are vast) had to be allocated to this final reporting task.
Fieldwork manual
Using the baseline fieldwork manual as a basis, an extensive midline fieldwork manual was developed covering basic guidelines on behaviour and attitude, the use of CAPI and data validation procedures, instructions on fieldwork plans and procedures (sample, targets, replacements, communication, and reporting), as well as a dedicated section describing all instruments and protocols. Insights from the pre-test were reflected in the manual. Draft versions of the instrument and protocol sections were printed, handed out to interviewers as a reference during the training, and used as guidelines by the trainers. The manual was updated on an ongoing basis during the training and pilot phase where updated conventions or additional clarifications were needed. The final version was printed at the end of the pilot phase and copies were provided to the field teams.
Training and pilot
Enumerator training and a field pilot took place in Dar es Salaam and Dodoma from 29th March to 14th April 2016. A total of 47 enumerator trainees participated in the training. The training was delivered by four members of the fieldwork management team and the overall IE project manager. The main objective of the training was to ensure that team members would be able to master the instruments, understand and correctly implement the fieldwork protocols, comfortably use CAPI, and be able to perform data validation. Supervisors were furthermore trained on their extra responsibilities of data management, fieldwork and financial management, logistical tasks, and the transmission of data files to the data manager.
The training had two components: a classroom-based training component and a field-based component that included a full scale pilot. The performance of enumerators was assessed on an on-going basis, using written assessments and observation of performance in the field and these scores were recorded. At the end of the training and pilot phase, the final fieldwork team was selected using this information.
Fieldwork organisation
The fieldwork plan was designed to cover all 200 schools across all 12 regions and 25 districts within no more than seven weeks, from 15 April 2016 to 27 May 2016. Teams communicated regularly with OPM to report delays and/or any event likely to affect the feasibility of the fieldwork plan.
The team composition and fieldwork model at ML were set up differently from baseline in order to: a) reduce transport costs, by reducing car days relative to fieldworker days and moving more travel days to Saturdays (schools closed, but a working day for fieldworkers); and b) translate the reduced instrument requirements in control schools into smaller control teams. At baseline, fieldwork was undertaken by 15 teams of 3 fieldworkers, each visiting a school on two consecutive days. At midline, 4 treatment teams of 6 fieldworkers (1 supervisor and 5 enumerators) and 4 control teams of 5 fieldworkers (1 supervisor and 4 enumerators) visited one school on one day. Each team had one supervisor who was responsible for quality-checking the enumerators' work.
The fieldwork started on the 15th of April 2016 and ended on the 27th of May 2016 with no major breaks in-between.
All sampled schools, head teachers, teachers and pupils were uniquely identified by ID codes assigned either before the fieldwork (region, district and school IDs), or at the time of the school visit using automated tables in CAPI (teacher, lesson observation and pupil IDs). The first set of data checking activities included (using Stata):
-Checking of all IDs;
-Checking for missing observations;
-Checking for missing item responses where none should be missing; and
-First round of checks for inadmissible/out of range and inconsistent values.
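For example, out-of-range and inconsistency checks of the kind listed above can be expressed as Stata assertions (the variable names and bounds here are hypothetical):

```stata
* Inadmissible/out-of-range and consistency checks (hypothetical variables).
assert inrange(ht_age, 18, 75) if !missing(ht_age)  // head teacher age in range
assert pupils_present <= pupils_enrolled            // headcount cannot exceed enrolment
```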
These checking activities resulted in four edited datasets (school/head teacher level, pupil level, teacher level and lesson observation level), which were sent to the OPM impact evaluation team for further checking and analysis.
The four edited datasets received from the OPM Tanzania survey team were subject to a second set of checking and cleaning activities. This included checking for out of range responses and inadmissible values not captured by the filters built into the CAPI software or the initial data checking process by the survey team. This also involved recoding of non-responses due to the questionnaire design and rules of questionnaire administration for the pupil learning assessment and teacher development needs assessment.
A comprehensive data checking and analysis system was created, including a logical folder structure, a detailed data documentation guide and template syntax files (in Stata), to ensure that data checking and cleaning activities were recorded, and that all analysts used the same file and variable naming conventions, variable definitions and disaggregation variables, and applied weighted estimates appropriately.
Name |
---|
Oxford Policy Management Ltd |
Name | URL | Email |
---|---|---|
Oxford Policy Management Ltd | http://www.opml.co.uk/ | admin@opml.co.uk |
The datasets have been anonymised and are available as a Public Use Dataset. They are accessible to all for statistical and research purposes only, under the following terms and conditions:
The original collector of the data, Oxford Policy Management Ltd, and the relevant funding agencies bear no responsibility for use of the data or for interpretations or inferences based upon such uses.
Oxford Policy Management. Education Quality Improvement Programme in Tanzania Impact Evaluation Midline Survey 2016, Version 2.2 of the public use dataset (December 2021).
The user of the data acknowledges that the original collector of the data, the authorised distributor of the data, and the relevant funding agency bear no responsibility for use of the data or for interpretations or inferences based upon such uses.
(c) 2021, Oxford Policy Management Ltd.
Name | URL | Email |
---|---|---|
Oxford Policy Management Ltd | http://www.opml.co.uk/ | admin@opml.co.uk |
DDI_TZA_2016_EQUIPIE-ML_v02_M
Name | Affiliation | Role |
---|---|---|
Harb, Jana | Oxford Policy Management Ltd | Data analyst |
Pettersson Gelander, Gunilla | Birk Consulting | Quantitative education lead |
2021-12-02
Version 2 (December 2021)