IHSN Survey Catalog
Education Sector Support Programme 2012

Nigeria, 2012
Reference ID
NGA_2012_ESSPIN-CS1_v01_M
Producer(s)
Stuart Cameron
Created on
Jun 26, 2017
Last modified
Jun 26, 2017
Identification

    Survey ID number

    NGA_2012_ESSPIN-CS1_v01_M

    Title

    Education Sector Support Programme 2012

    Subtitle

    Composite Survey 1

    Country
    Name Country code
    Nigeria NGA
    Study type

    1-2-3 Survey, phase 1 [hh/123-1]

    Series Information

The Education Sector Support Programme in Nigeria (ESSPIN) Composite Survey 1 (CS1), conducted in 2012, is the first of three ESSPIN composite surveys.
    The first round of the composite survey was carried out in all six ESSPIN states: Enugu, Jigawa, Kaduna, Kano, Kwara and Lagos.

    Abstract

    In July 2012, representative stratified samples of public primary schools, head teachers, teachers and pupils were surveyed in the six Nigerian states where the DFID/UKaid-funded Education Sector Support Programme in Nigeria works.

The ESSPIN Composite Survey (CS) process serves two main functions: periodically assessing the effects of ESSPIN's integrated School Improvement Programme (SIP), and reporting on selected indicators of the quality of education in the six ESSPIN-supported states. The CS addresses five Output pillars of the SIP, namely teacher competence, head teacher effectiveness, school development planning, school-based management committee functionality, and inclusive practices in schools. It also provides estimates of one Outcome indicator (school quality) and one Impact indicator (pupil learning achievement). The CS is wide-ranging but not exhaustive: it complements other ESSPIN/state monitoring and evaluation processes in areas such as institutional development, school enrolments and infrastructure. It brings together into a single exercise baseline surveys that were conducted by ESSPIN in 2010, hence 'composite' survey.

Four data collection methods were used to complete ten questionnaires: interviews, record schedules, observation, and oral/written tests. The total sample covered 595 schools/head teachers/SBMCs, 2,975 teachers and 9,520 pupils. Enumerators drawn from State School Improvement Teams and education officials were trained and then mobilised to collect the data over a six-week period, with field supervision by NPC and ESSPIN. Data entry, cleaning and checking took longer than intended due to several technical problems.

    Each indicator of education quality was underpinned by a variety of objectively observable criteria. Estimates (values drawn from a sample to describe the population as a whole) are shown within 95% confidence intervals. In the case of Kano (and to a lesser extent Kaduna) some values are insufficiently precise to include in programme-wide aggregates. Mean estimates for ESSPIN-supported and non-ESSPIN-supported schools are compared, and said to be significantly different at the 0.05 level (i.e., where there is at least a 95% probability that the values for Phase 1 and Control Schools are actually different from one another). For certain numeracy measures, a comparison of the difference between 2010 and 2012 values for Phase 1 and Control Schools is possible. In most cases, such 'difference in differences' calculations will have to wait until the CS is repeated in 2014 and beyond.

    Although those CS 2012 results which show a significant difference between Phase 1 and Control Schools cannot necessarily be ascribed to 'the ESSPIN effect' (since other characteristics of schools in those categories could actually determine the difference), in the absence of evidence for an alternative cause it is reasonable to suppose that ESSPIN interventions are having the intended effect. This is particularly true of the Output and Outcome indicators, but less likely with respect to Impact (children's learning outcomes) at this stage in the programme. The basis of allocation of schools to Phase 1 in each state is reported, to aid critical consideration of any selection bias.

    Kind of Data

    Sample survey data [ssd]

    Unit of Analysis

    School; Pupil; Teacher

    Version

    Version Description

    v2.1: Edited, anonymous dataset for public distribution.

    Version Date

    2012

    Scope

    Notes

    The scope of the Education Sector Support Programme in Nigeria (ESSPIN) 2012 includes:

    • School background
    • School leadership
    • School governance
    • Teacher
    • Lesson observation
    • Classroom mapping
    • P2 literacy
    • P4 literacy
    • P2 numeracy
    • P4 numeracy
Topics
    Topic Vocabulary
    Education World Bank
    Primary education World Bank
    Keywords
    Teacher competence, Head teacher effectiveness, School development planning, School inclusiveness, Gender, School-based management committee, Numeracy, Literacy, Enrolment, Teacher attendance, Impact evaluation, Education, Primary education, Nigeria, Structured pedagogy, English, Mathematics

    Coverage

    Geographic Coverage

    Six Nigerian states - Enugu, Jigawa, Kaduna, Kano, Kwara, and Lagos

    Universe

Schools in the six ESSPIN states - Enugu, Jigawa, Kaduna, Kano, Kwara, and Lagos

    Producers and sponsors

    Primary investigators
    Name Affiliation
    Stuart Cameron Oxford Policy Management
    Producers
    Name Affiliation
    Ifeatu Nnodu Oxford Policy Management
    Shefali Rai Oxford Policy Management
    Ekundayo Arogundade Oxford Policy Management
    Kelechi Udoh Oxford Policy Management
    Femi Adegoke Oxford Policy Management
    Rachita Daga Oxford Policy Management
    Babatunde Akano Oxford Policy Management
    Reg Allen Oxford Policy Management
    Katharina Keck Oxford Policy Management
    Zara Majeed Oxford Policy Management
    Funding Agency/Sponsor
    Name
    UK Department for International Development

    Sampling

    Sampling Procedure

    This section outlines the sampling strategy and target sample sizes for each unit of observation for the 2012 ESSPIN composite survey conducted in the six focus states: Enugu, Jigawa, Kaduna, Kano, Kwara and Lagos.

    1. Aim of sampling design

The analysis requires estimation of several indicators for each of the units of observation and, where the 2010 MLA data and documentation allow it, attribution of any observed changes in the outputs and outcomes of interest over time to the corresponding ESSPIN programme interventions. The sample of units was therefore selected with rigorous scientific procedures so that selection probabilities are known.

In each of the six focus states, the intended sample for the 2012 CS was 105 primary schools, except in Enugu, where phase 2 schools had not been identified at the time of the survey and the intended sample was 70 schools. This gives a total sample size of 595 schools. In each school, the head teacher (N~595), five teachers who had received ESSPIN-sponsored training (N~2,975) and five teachers who had not received such training (N~2,975) were expected to be interviewed, except where a sample school had fewer than five teachers of either category, in which case all such teachers were interviewed. Four primary 2 pupils were to be assessed in literacy and four in numeracy in each school, and similarly for primary 4 pupils (N~9,520). A quick arithmetic check of these intended totals is sketched below.
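
    The intended totals follow directly from the per-state school targets and the per-school selection rules. As a worked check (a sketch in Python; the figures come from the text above, not from the dataset itself):

```python
# Intended per-state school samples as described above
states = {"Enugu": 70, "Jigawa": 105, "Kaduna": 105,
          "Kano": 105, "Kwara": 105, "Lagos": 105}

schools = sum(states.values())       # 595 schools (one head teacher each)
teachers = schools * 5               # ~2,975 teachers per training category
pupils = schools * 4 * 2 * 2         # 4 pupils x {P2, P4} x {literacy, numeracy}

print(schools, teachers, pupils)     # 595 2975 9520
```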

2. Construction of sampling frame

The school sample frame was constructed using information on schools' ESSPIN and 2010 MLA survey participation and on school size from the Education Management Information System (EMIS). To enable the planned analyses, a multi-stage sampling design was used, as shown in Figure A.1 in the CS1 report.

    The lines connecting the units of observation in Figure A.1 represent sampling stages. The six survey states were pre-determined, as the ESSPIN programme operates in these states. In each focus state, public primary schools were selected (first stage), and then within each sample school, teachers and grade 2 and grade 4 pupils were selected (second stage). The first sampling stage is stratified in order to allow the observation of a minimum number of units in each stratum of analytical importance, such as ESSPIN phase 1, ESSPIN phase 2, and control (no ESSPIN interventions) schools. The total intended sample across the six states was 595 public primary schools.

3. Drawing of the samples for the baseline survey

    Selection of schools
    The major sampling strata (hereafter denoted with the subscript h) are the schools' participation in the ESSPIN programme: ESSPIN phase 1 schools, ESSPIN phase 2 schools, and control (no ESSPIN intervention) schools in each of the six states with the exception of Enugu, where there are no phase 2 schools. Each of the major strata is divided into two sub-strata, respectively composed of the schools selected and not selected for the 2010 MLA survey.

    2010 MLA schools were selected in one of two ways depending on the total number of 2010 MLA schools in the 2010 MLA school sub-strata. If there were more than 17 MLA schools, 17 were selected using systematic equal probability sampling and if there were fewer than 17 MLA schools, all were selected with certainty.

Systematic equal probability sampling was used because this method had been used to select the school sample for the 2010 MLA survey, and because a minimum number of 2010 MLA schools needed to be contained within the 2012 sample to enable analysis over time of any changes in pupil learning as measured by the MLA. A sketch of the method follows.
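
    For illustration, the fixed-interval, random-start selection described here might look as follows (a minimal sketch; the survey's actual selection tooling is not documented in this record):

```python
import random

def systematic_sample(frame, n):
    """Systematic equal-probability sample of n units from an ordered frame.

    If the frame holds n units or fewer, all are taken with certainty,
    mirroring the rule for small 2010 MLA sub-strata described above.
    """
    if n >= len(frame):
        return list(frame)
    step = len(frame) / n            # fractional sampling interval
    start = random.uniform(0, step)  # random start within the first interval
    return [frame[int(start + k * step)] for k in range(n)]

# e.g. select 17 of 42 MLA schools in a sub-stratum (hypothetical frame)
mla_schools = [f"school_{i:03d}" for i in range(42)]
print(systematic_sample(mla_schools, 17))
```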

Selection of teachers
    The head teacher was interviewed in all sample schools.
    Five ESSPIN-trained and five non-ESSPIN-trained teachers were selected in each sample school using simple random sampling. The teacher sampling was conducted in schools by the enumerators, who used a special form and random number tables.

    Selection of pupils
    Four grade 2 pupils and four grade 4 pupils were selected for each of the literacy and numeracy assessments respectively in each sample school using simple random sampling. The pupil sampling was conducted in schools by the enumerators who used a special form and random number tables similar to the teacher sampling.

The teacher and pupil sampling was conducted in the field. The sampling selections delegated to the enumerators were conducted as part of interviewing processes that had broader objectives. For this reason the selection processes were not supported by stand-alone forms but were instead integrated with the survey questionnaires, and proceeded as follows for pupils (the same procedure was used for teacher sampling; a software sketch of the selection step appears after this list):

• First, the enumerator used the school's pupil register to write pupil codes next to each pupil name, starting with 1 for the first pupil listed and continuing to the last pupil listed, which provided the largest pupil code.
    • Second, the enumerator wrote down the largest pupil code in a box on the questionnaire.
    • Third, the enumerator scanned the provided random number table, according to the instructions provided, to find the pupil codes of the eligible pupils to be selected.
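
    In software terms, the manual draw amounts to sampling k distinct codes from 1..N. A sketch (in the field this was done with printed random number tables; random.sample below is only a stand-in):

```python
import random

def select_pupils(register_size, k=4, seed=None):
    """Draw k distinct pupil codes from a register numbered 1..register_size,
    simulating the random-number-table procedure described above."""
    rng = random.Random(seed)
    return sorted(rng.sample(range(1, register_size + 1), k))

# e.g. a primary 2 register with 57 pupils; pick 4 for the literacy test
print(select_pupils(57, k=4, seed=1))
```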

    Panel component
CS1 forms the baseline survey, with the aim of visiting the same schools in future rounds.

    Deviations from the Sample Design

One data issue to highlight is that the intended stratification of teachers in the phase 1 schools into two groups (teachers trained under the SIP and teachers not trained under the SIP) was not possible in the field. The majority of teachers were unable to distinguish SIP training from other types of in-service training they had received during the same period. ESSPIN deliberately sought to ensure that the SIP training is part of the state in-service training delivery system, and thus did not attempt to tag it to the 'ESSPIN or SIP brand'. This appears to have been a successful strategy, and is again characteristic of surveys conducted in applied programme intervention mode rather than pure research mode. In interpreting the findings on teacher competence in the phase 1 schools later in the report, it is important to bear in mind that a maximum of only six teachers in each school participated in SIP training directly. For small schools this would have been all teachers, whilst for large schools it was a very small percentage.

    Response Rate

The actual sample interviewed was inevitably smaller than intended; this occurs in almost all sample surveys for a variety of reasons. A comparison of the intended and actual number of records obtained for all six states combined is in Table 3. The highlighted column shows that 99% of schools and 96% of pupils were sampled as intended, which is a good response rate. The comparable figure falls to 72% for teachers, but this is largely explained by the fact that many schools had fewer than 10 teachers on the staff, so fewer teachers were interviewed by necessity. The third and fourth columns provide information on one aspect of the quality of the records: missing data. Not all records could be used in the analysis because of missing data, and these columns show the highest and lowest number of records that were actually used to generate key estimates. It is clear that missing data is a particular problem for school records (questionnaires 2 and 3). For at least one school-level key estimate, 17% of records had no data.

    Table 3 in the related CS3 report includes the sample units selected and actually interviewed for all 6 states.

    Weighting

    The sampling weights for schools, teachers and pupils are the inverse of their respective selection probabilities.

School selection probability

    The probability of selecting school i in stratum h is given by:

    p_hi = n_h / N_h, for 2010 MLA schools; [1]
    p_hi = [(N_h - m_h) / N_h] * [(35 - n_h) * t_hi / T_h], for non-2010 MLA schools

    where
    m_h is the number of public primary schools in the stratum that participated in the 2010 MLA survey;
    n_h is the number of 2010 MLA public primary schools in the stratum selected for the 2012 sample;
    N_h is the total number of public primary schools in the stratum, as reported in the school sample frame;
    t_hi is the number of teachers in school i as per the sample frame; and
    T_h is the total number of teachers, as per the sample frame, in all public primary schools in the stratum that did not participate in the 2010 MLA survey.

Teacher selection probability

    The probability of choosing an (ESSPIN-trained or non-ESSPIN-trained, respectively) teacher j in school i of stratum h is given by:

    p_hij = p_hi * (5 / t_hi) [2]

    where
    p_hi is the probability of selecting the school (given by Equation [1]); and
    t_hi is the number of eligible teachers in the school at the time it was visited for the ESSPIN survey.

Pupil selection probability

    The probability of choosing a (grade 2 or grade 4, respectively) pupil k in school i of stratum h is given by:

    p_hik = p_hi * (4 / s_hi) [3]

    where
    p_hi is the probability of selecting the school (given by Equation [1]); and
    s_hi is the number of eligible pupils in the school at the time it was visited for the ESSPIN survey.
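
    Putting the three formulas together, the weight construction might be sketched as follows (variable names mirror the notation above; all example numbers are invented):

```python
def school_prob(is_mla_school, n_h, m_h, N_h, t_hi, T_h):
    """Selection probability for school i in stratum h (Equation [1])."""
    if is_mla_school:
        return n_h / N_h
    return ((N_h - m_h) / N_h) * ((35 - n_h) * t_hi / T_h)

def teacher_weight(p_school, t_hi):
    """Inverse-probability weight for a teacher: Equation [2] inverted."""
    return 1.0 / (p_school * (5 / t_hi))

def pupil_weight(p_school, s_hi):
    """Inverse-probability weight for a pupil: Equation [3] inverted."""
    return 1.0 / (p_school * (4 / s_hi))

# e.g. a non-MLA school in a stratum with 3 of 12 MLA schools sampled,
# 200 schools in the frame, 9 of 1,800 frame teachers (numbers invented)
p = school_prob(False, n_h=3, m_h=12, N_h=200, t_hi=9, T_h=1800)
print(1 / p, teacher_weight(p, t_hi=9), pupil_weight(p, s_hi=35))
```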

    Survey instrument

    Questionnaires

    The 2012 CS used ten different questionnaires. These were of four types:

• Interview: oral questions to individual respondents. For example, head teachers were asked about their lesson observation practices. Often the questions required the respondent to produce written evidence of action.
• Record schedule: for collecting information from written records. This was used to collect information on primary 2 and 4 enrolment and teacher numbers, which the data collectors used to draw the pupil and teacher samples.
    • Observation: for recording information on activities taking place during lessons.
• Test: written/oral questions given to pupils on English literacy and numeracy.

    Questionnaires used in the 2012 Composite Survey

Questionnaire - Respondent - Type of instrument

    1. School background - Head teacher - Record schedule (and sampling)
    2. School leadership - Head teacher - Interview
    3. School governance - SBMC chair & secretary - Interview
    4. Teacher - Teacher - Interview
    5. Lesson observation - Teacher - Observation
    6. Classroom mapping - Teacher - Observation
    7. P2 literacy - P2 pupil - Test
    8. P4 literacy - P4 pupil - Test
    9. P2 numeracy - P2 pupil - Test
    10. P4 numeracy - P4 pupil - Test

    Data collection

    Dates of Data Collection
    Start End
    2012-06-15 2012-08-25
    Data Collectors
    Name
    Oxford Policy Management, Nigeria Office
    Supervision

    The senior member of each field team was responsible for data quality during the collection process. They checked questionnaires and ensured that sampled units were interviewed. In addition, ten roving quality control officers (hired from the National Population Commission (NPC)), checked up on the work of the field teams by verifying sampling procedures and checking the accuracy of data collection. These officers filled in separate quality control questionnaires. Members of ESSPIN's state and Abuja teams carried out spot checks, monitored fieldwork progress and provided on-the-spot guidance throughout the period.

    Data Collection Notes
    1. Pre-test

    2. Team Composition
In each state, the survey was undertaken by 25 field teams, each with two members. The data collectors were members of the State School Improvement Teams (SSITs), School Support Officers (SSOs) or Social Mobilization Officers (SMOs), and so were experienced in observing classroom practice and other aspects of school management and governance. To help ensure objectivity, data collectors were not deployed to the areas and districts to which they were currently assigned. Participation in the survey had significant professional development benefits for the data collectors with respect to their school support roles. This represents a lasting benefit in comparison to the alternative of buying data collection services from an external supplier.

3. Training
      Data collectors were trained using a two-step, layered approach. Senior data collectors from each state were trained over a two-week period in May 2012, using copies of the questionnaires and classroom- and field-based practice. This group then trained the remaining data collectors during an intensive eight-day period in early June.

4. Language
      Interviews were conducted in local languages (usually Hausa, Igbo, or Yoruba).

5. Timeline
      Fieldwork took approximately six weeks to complete, from mid-June to late July 2012.

6. Notable events
      During this period there was serious civil unrest throughout the north of Nigeria, which disrupted the fieldwork in Kaduna and Kano in particular. The possibility that this traumatic environment affected some of the results in these states cannot be ruled out.

    Data processing

    Data Editing

    Data cleaning and analysis were conducted from June 2016 through November 2016 by a small team based in the OPM offices in Oxford and South Africa.

Data were entered in Microsoft Access, and a detailed checking and cleaning process was undertaken by a team of experienced survey analysts in liaison with the NPC, which was responsible for data entry. The process of data checking and cleaning took much longer than expected for a number of reasons, including: (i) data entry was slower than expected, and a considerable number of questionnaires had to be re-entered; (ii) some of the identifier codes for the various units sampled, particularly teachers, were not completed correctly in the field and had to be rectified manually; (iii) the design of the questionnaires did not include some standard features, including skip codes for questions that were not applicable, and the problems arising from this had to be fixed manually. These issues, and others, will be documented as part of a review process to ensure that improvements are made for the 2014 CS round.

    All statistical analyses were performed with Stata, using its 'svy' facilities for survey data analysis to account for the sampling design.
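
    The record does not include the Stata code itself. As a rough illustration of what design-based estimation involves, the sketch below computes a design-weighted mean with a school-clustered, linearised standard error in Python (hypothetical data; it ignores stratification and finite population corrections, which Stata's svy machinery handles):

```python
import numpy as np

def svy_mean(y, w, cluster):
    """Design-weighted mean with a Taylor-linearised standard error
    clustered at the school level -- a rough analogue of design-based
    estimation (sketch only; no stratification or FPC)."""
    y, w = np.asarray(y, float), np.asarray(w, float)
    mean = np.sum(w * y) / np.sum(w)
    z = w * (y - mean) / np.sum(w)      # score contributions
    totals = {}
    for c, zi in zip(cluster, z):       # sum scores within each cluster
        totals[c] = totals.get(c, 0.0) + zi
    t = np.array(list(totals.values()))
    var = len(t) / (len(t) - 1) * np.sum(t ** 2)
    return mean, np.sqrt(var)

# hypothetical pupil scores, weights and school identifiers
y = [40, 55, 62, 48, 70, 66]
w = [58.2, 58.2, 30.1, 30.1, 44.5, 44.5]
schools = ["A", "A", "B", "B", "C", "C"]
m, se = svy_mean(y, w, schools)
print(f"mean = {m:.1f}, 95% CI = [{m - 1.96*se:.1f}, {m + 1.96*se:.1f}]")
```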

    Data appraisal

    Estimates of Sampling Error

Estimates derived from samples are characterised by sampling errors. In other words, because we do not obtain the information we want from the entire population but from a random subset, statistical measures of interest, such as the mean, are not calculated with perfect precision but are likely to fall within a range of values called a confidence interval. In the related report, mean estimates for logframe indicators are presented graphically with their 95% confidence intervals. Estimates presented for teachers and pupils in each state are based on well over 100 observations (and commonly many more), while estimates for school-level indicators are based on more than 30 observations. Annex D of the report contains mean estimates, standard errors (used to compute confidence intervals), and sample sizes for all of the logframe indicators.

Extreme caution is needed in interpreting estimates which have very large confidence intervals. The estimates of the pupil learning indicators in Kano, and in some cases Kaduna, display large confidence intervals and are very imprecise. This is caused by an unexpectedly large 'design effect'. The design effect is the loss of statistical efficiency due to the use of cluster sampling rather than simple random sampling. In the case of the CS, primary pupils were not selected randomly from a list of all pupils in each state, because no such list exists. Instead, they were selected from within the sampled schools. In simple terms, selecting an additional unit from the same cluster (in this case, the same school) adds less information than a completely independent selection would. If the intracluster correlation for the statistic in question is high, this drives up the design effect. This could happen if pupils within the sampled schools gave similar (or the same) answers. The design effect reduces the effective sample size by the 'DEFF factor', as illustrated in the sketch below. In general, design effects for a well-designed study would be less than 3. For some of the mean estimates of pupil learning for Kano, the DEFF is more than 50. For this reason, apart from presenting the logframe indicators for information, Kano has been excluded from the analysis of pupil learning in this report. It would require somewhat lengthy further analysis to determine the cause of the high design effect in Kano. ESSPIN will report the results of any such further investigations in due course, if the capacity to undertake them can be identified, not least to avoid a repetition of this problem in future rounds of the survey.
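
    A small numeric illustration, using the standard approximation DEFF = 1 + (b - 1) * rho for average cluster take b and intracluster correlation rho (the rho values are invented; the approximation also ignores unequal weighting, which can push DEFF higher still):

```python
def deff(cluster_take, rho):
    """Approximate design effect for cluster samples: 1 + (b - 1) * rho."""
    return 1 + (cluster_take - 1) * rho

# 16 pupils were tested per school (4 pupils x 2 grades x 2 subjects)
for rho in (0.02, 0.30, 0.90):
    d = deff(16, rho)
    print(f"rho={rho:.2f}: DEFF={d:.1f}, effective n per 100 pupils={100 / d:.0f}")
```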

    Data Access

    Access authority
    Name URL Email
    Oxford Policy Management http://www.opml.co.uk admin@opml.co.uk
    Access conditions

    The data files have been anonymised and are available as a Public Use Dataset. They are accessible to all for statistical and research purposes only, under the following terms and conditions:

    1. The data and other materials will not be redistributed or sold to other individuals, institutions, or organisations without the written agreement of Oxford Policy Management Ltd.
    2. The data will be used for statistical and scientific research purposes only. They will be used solely for reporting of aggregated information, and not for investigation of specific individuals or organisations.
    3. No attempt will be made to re-identify respondents, and no use will be made of the identity of any person or establishment discovered inadvertently. Any such discovery would immediately be reported to Oxford Policy Management Ltd.
    4. No attempt will be made to produce links among datasets provided by Oxford Policy Management Ltd, or among data from Oxford Policy Management Ltd and other datasets that could identify individuals or organisations.
    5. Any books, articles, conference papers, theses, dissertations, reports, or other publications that employ data obtained from Oxford Policy Management Ltd will cite the source of data in accordance with the Citation Requirement provided with each dataset.
    6. An electronic copy of all reports and publications based on the requested data will be sent to Oxford Policy Management Ltd.

    The original collector of the data, Oxford Policy Management Ltd, and the relevant funding agencies bear no responsibility for use of the data or for interpretations or inferences based upon such uses.

    Citation requirements

    Oxford Policy Management. Education Sector Support Programme in Nigeria (ESSPIN) Composite Survey 1, Version 2.1 of the public use dataset (2012).

    Disclaimer and copyrights

    Disclaimer

    The user of the data acknowledges that the original collector of the data, the authorised distributor of the data, and the relevant funding agency bear no responsibility for use of the data or for interpretations or inferences based upon such uses.

    Copyright

    (c) 2012, Oxford Policy Management Ltd

    Contacts

    Contacts
    Name Affiliation Email
    Stuart Cameron Survey Project Manager stuart.cameron@opml.co.uk