NGA_2010_NEDS_v01_M
DHS EdData Survey 2010
Name | Country code |
---|---|
Nigeria | NGA |
Demographic and Health Survey, Special [hh/dhs-sp]
The 2010 Nigeria Education Data Survey (2010 NEDS) is important in several respects. The survey, which was conducted in collaboration with the Federal Ministry of Education (FMOE) and the Universal Basic Education Commission (UBEC), is the second of its kind conducted to obtain household information on children’s education; the first was conducted in 2004. The 2010 NEDS was linked to the 2008 Nigeria Demographic and Health Survey (NDHS).
The 2010 NEDS is similar to the 2004 Nigeria DHS EdData Survey (NDES) in that it was designed to provide information on education for children age 4–16, focusing on factors influencing household decisions about children’s schooling. The survey gathers information on adult educational attainment, children’s characteristics and rates of school attendance, absenteeism among primary school pupils and secondary school students, household expenditures on schooling and other contributions to schooling, and parents’/guardians’ perceptions of schooling, among other topics. The 2010 NEDS was linked to the 2008 NDHS in order to collect additional education data on a subset of the households surveyed in the 2008 NDHS (those with children age 2–14). The 2008 NDHS, for which data collection was carried out from June to October 2008, was the fourth DHS conducted in Nigeria (previous surveys were implemented in 1990, 1999, and 2003).
The goal of the 2010 NEDS was to follow up with a subset of approximately 30,000 households from the 2008 NDHS. However, of the 34,070 households interviewed in the 2008 NDHS, only 20,823 had eligible children age 2–14. To produce statistically reliable estimates at the State level, 1,700 children were needed in each State and the Federal Capital Territory (FCT), and it was estimated that an additional 7,300 households would be required to reach the total number of eligible children needed. To bring the sample size up to the required target, additional households were screened and added to the overall sample; these households, however, were not administered the NDHS questionnaire. The two surveys were therefore statistically linked to produce the results presented in the survey report, although for some households data were imputed or not included.
Sample survey data [ssd]
Households
Individuals
The survey covers topics such as
National
Name |
---|
National Population Commission |
Name |
---|
Federal Ministry of Education |
Universal Basic Education Commission |
Name | Role |
---|---|
United States Agency for International Development | Funding |
UK Department for International Development | Funding |
Name | Role |
---|---|
RTI International | Technical Assistance |
The eligible households for the 2010 NEDS are the same households from the 2008 NDHS sample in which interviews were completed and in which there was at least one child age 2-14, inclusive. In the 2008 NDHS, 34,070 households were successfully interviewed, and the goal was to perform a follow-up NEDS on a subset of approximately 30,000 households. However, records from the 2008 NDHS sample showed that only 20,823 had children age 2-14 (age 4-16 at the time of the 2010 NEDS). Therefore, to bring the sample size up to the required number of children, additional households were screened from the NDHS clusters.
The first step was to use the NDHS data to determine eligibility based on the presence of a child age 2-14. Second, based on a series of precision and power calculations, RTI determined that the final sample size should yield approximately 790 households per State to allow statistically reliable reporting at the State level, resulting in a total completed sample size of 790 × 37 = 29,230. This calculation was driven by the desired precision of estimates, analytic goals, and available resources. To achieve the target number of households with completed interviews, the number of selected households was increased to accommodate expected attrition from unlocatable addresses, eligibility issues, and non-response or refusal. Third, to reach the target sample size, additional households were selected from those that had been listed by the NDHS but had not been sampled and visited for interviews. The final number of households with completed interviews was 26,934, slightly lower than the original target but sufficient to yield interview data for 71,567 children, well above the targeted number of 1,700 children per State.
A very high overall response rate of 97.9 percent was achieved, with interviews completed in 26,934 of the 27,512 occupied households from the original sample of 28,624 households. Response rates did not differ markedly between urban and rural areas (98.5 percent and 97.6 percent, respectively). Response rates for parents/guardians and children were even higher, while the rate for independent children, at 97.4 percent, was slightly lower than the overall household rate. In all cases, urban/rural differences were negligible.
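The headline figures in the two preceding paragraphs can be reproduced with simple arithmetic. The short sketch below (plain Python, illustrative only and not part of the survey's own processing) checks the total household target, the average number of children interviewed per State, and the overall household response rate.

```python
# Illustrative arithmetic check of the headline figures quoted above;
# this snippet is not part of the survey's own tooling.

states = 37                                   # 36 States plus the FCT
per_state_households = 790                    # completed interviews targeted per State
print(states * per_state_households)          # 29,230 total target households

children_interviewed = 71_567
print(children_interviewed / states)          # about 1,934 children per State, above the 1,700 target

completed, occupied = 26_934, 27_512
print(round(100 * completed / occupied, 1))   # 97.9 percent overall household response rate
```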
The 2010 NEDS analysis weights were derived from the original sampling weights of the 2008 Nigeria Demographic and Health Survey (NDHS). The NDHS weights were adjusted to account for the newly sampled households, scaled to the population of children in five-year age categories by State, and then rescaled back to the sample size.
The 2010 NEDS sample comprised all households in the 2008 NDHS with eligible children (age 4-16 in 2010), so the NDHS weights served as the basis for the 2010 NEDS weights. At the cluster level, the NDHS weights were adjusted by multiplying them by the number of households found in both the 2008 NDHS and the 2010 NEDS and dividing by the sum of that number and the number of newly sampled NEDS households. To obtain population weights, the adjusted weights were then scaled to the population by age and State: within each State, the adjusted weights were multiplied by the population of eligible children in five-year age categories and divided by the sum of the adjusted weights. To replicate the tables from the 2004 survey, the population weights were rescaled to the number of sampled eligible children in the 2010 NEDS: at the national level, each population weight was divided by the sum of the population weights and multiplied by the total number of eligible children sampled in the 2010 NEDS.
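As a rough illustration of the three steps just described (cluster-level adjustment, scaling to the child population by State and age group, and rescaling to the sample size), the following pandas sketch shows one way the calculation could be organized. All column names are hypothetical placeholders, not the variable names on the released data files.

```python
import pandas as pd

def build_neds_weights(children: pd.DataFrame,
                       households: pd.DataFrame,
                       population: pd.DataFrame) -> pd.DataFrame:
    """children:   one row per eligible child, carrying its household's 2008 NDHS weight
    households: one row per sampled household, flagging the origin of the household
    population: eligible-child population by state and five-year age_group"""
    # Step 1: cluster-level adjustment factor =
    #   households in both NDHS and NEDS / (those households + newly sampled NEDS households)
    counts = households.groupby("cluster").agg(
        n_both=("in_both_surveys", "sum"),
        n_new=("newly_sampled", "sum"),
    )
    factor = counts["n_both"] / (counts["n_both"] + counts["n_new"])
    children = children.assign(
        adj_weight=children["ndhs_weight"] * children["cluster"].map(factor)
    )

    # Step 2: scale the adjusted weights to the child population by state and age group.
    children = children.merge(population, on=["state", "age_group"])
    group_total = children.groupby(["state", "age_group"])["adj_weight"].transform("sum")
    children["pop_weight"] = children["adj_weight"] * children["child_population"] / group_total

    # Step 3: rescale nationally so the weights sum to the number of sampled eligible children.
    children["final_weight"] = (
        children["pop_weight"] * len(children) / children["pop_weight"].sum()
    )
    return children
```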
The four questionnaires used in the 2004 Nigeria DHS EdData Survey (NDES) served as the basis for the 2010 NEDS instruments.
More than 90 percent of the questionnaire content remained the same; items were updated only where there was a clear justification for changing an item's formulation or a specific requirement for additional items. A one-day workshop was convened with the NEDS Implementation Team and the NDES Advisory Committee to review the instruments and identify any needed revisions, additions, or deletions. Efforts were made to collect data in a form that would ease integration of the 2010 NEDS data into the FMOE's national education management information system. Items identified as problematic in the 2004 NDES, as well as items identified as potentially confusing or difficult, were proposed for revision. Issues that USAID, DFID, FMOE, and other stakeholders identified as essential but not covered in the 2004 NDES questionnaires were proposed for incorporation into the 2010 NEDS instruments, with USAID serving as the final arbiter regarding questionnaire revisions and content.
General revisions accepted into the questionnaires included the following:
Upon completion of revisions to the English-language questionnaires, the instruments were translated and adapted by local translators into three languages (Hausa, Igbo, and Yoruba) and then back-translated into English to ensure the accuracy of the translation. After the questionnaires were finalized, the training materials used in the 2004 NDES and developed by Macro International, which included training guides, data collection manuals, and field observation materials, were reviewed. The materials were updated to reflect the changes in the questionnaires. In addition, the procedures described in the manuals and guides were carefully reviewed, and adjustments were made, where needed, based on experience with large-scale surveys and lessons learned from the 2004 NDES and the 2008 NDHS, to ensure the highest quality data capture.
Start | End |
---|---|
2009 | 2010 |
Pre-test classroom training, held in September 2009, covered an introduction and study overview, general interviewing techniques, a review of the four questionnaire types, anthropometry measurements and the literacy test, questionnaire certification exams, and administrative procedures. The pre-test training served as a train-the-trainers session for the coordinators who would conduct the larger full-scale training session. Data collection manuals were distributed to field staff about two weeks before training for review. Constructive feedback on interviewing techniques was provided to participants throughout these exercises, giving interviewers ample opportunity to address identified issues and learn proper interviewing, questionnaire marking, and storage techniques. After classroom training, practice interviews were conducted in surrounding areas over a seven-day period, after which the instruments, procedures, and training were revised in accordance with lessons learned from the pre-test.
Training
For the full-scale training, held in March 2010, approximately 300 staff were trained, including interviewers, field supervisors, field editors, and quality control interviewers. The 2010 NEDS interviewers were a subset of the 2008 NDHS interviewers. NPC coordinators conducted the two-week classroom training for the full-scale survey, with RTI staff on site to provide technical assistance as needed. The training also included practice interviews in neighborhoods in and around Keffi, using the questionnaire in English and the three local languages. Certification exercises were used to assess interviewers and ensure that they had acquired the skills needed to carry out their field duties correctly.
After classroom training, teams were grouped by language (the three major Nigerian languages and English) to conduct practice interviews using the corresponding questionnaires. In addition, field supervisors, editors, and quality control (QC) interviewers received further training on proper auditing and field supervision techniques.
Data Collection
Through its previous experience with field surveys such as NDHS, NDES, and the Nigerian National Census, NPC has developed a field team structure that maximizes data quality. This same data collection team structure was used for the 2010 NEDS. Specifically, field interviewers were organized into survey teams, one for each of the 36 States, plus one for Abuja. NPC coordinated and supervised field operations for all 37 teams, each comprising 3 field interviewers, 1 field supervisor, 1 field editor, and 1 driver. In addition to the survey team, each State was assigned 1 QC interviewer. The QC interviewers, however, did not travel with the survey teams. Instead, they trailed the State teams to revisit and re-administer the full questionnaire during the first 2 weeks of data collection and for two weeks of every month of data collection thereafter. This was done in approximately 10 percent of all completed households. Field editors (1 per team) traveled with the survey team and edited all questionnaires in the field to ensure they were correct and complete. Field editors also observed field interviews where possible to ensure that the proper study protocols were followed. Field supervisors made team arrangements and sample assignments. Supervisors were responsible for the quality of the work carried out by the team, ensuring that interviewers followed administration protocols and controlling sample implementation.
The coordinators/trainers who conducted the training for the full-scale survey also oversaw field operations in their two assigned States, monitoring field activities and providing NPC’s NEDS Project Director with feedback and updates on field team activities.
After the data were keyed, coordinators reviewed data frequencies and tables to identify any data inconsistencies and errors. Coordinators periodically visited teams in the field to provide feedback and retraining as needed. To ensure a high level of quality and compliance with study protocols, RTI staff also conducted field observation visits. During these visits, RTI staff handled field operational problems and proposed solutions, providing feedback and encouragement to the interviewers.
Data processing for the 2010 NEDS occurred concurrently with data collection. Completed questionnaires were retrieved by the field coordinators/trainers and delivered to NPC in standard envelopes labeled with the sample identification, team, and State name. Each shipment also contained a written summary of any issues detected during data collection. The questionnaire administrators logged receipt of the questionnaires, acknowledged the list of issues, and acted on them where required. The editors performed an initial check of the questionnaires, coded any open-ended questions (with assistance from the data entry operators where needed), and made the questionnaires available for assignment to the data entry operators. The data entry operators entered the data into the system, with the support of the editors for erroneous or unclear data.
Experienced data entry personnel were recruited from those who had performed data entry for NPC on previous studies. The data entry teams comprised data entry coordinators, supervisors, and operators. Data entry coordinators oversaw the entire data entry process, from programming and training to final data cleaning; they made assignments, tracked progress, and ensured the quality and timeliness of data entry. Data entry supervisors were on hand at all times to ensure that proper procedures were followed and to help editors resolve any inconsistencies uncovered. The supervisors controlled incoming questionnaires, assigned batches of questionnaires to the data entry operators, and managed their progress. Approximately 30 clerks were recruited and trained as data entry operators to enter all completed questionnaires and to perform the second entry for data verification. Editors worked with the data entry operators to review information flagged as “erroneous” or “dubious” during data entry and followed up to resolve those anomalies.
The data entry program developed for the 2004 NDES was revised to reflect the revisions to the 2010 NEDS questionnaires. The electronic data entry and reporting system performed internal consistency checks and flagged inconsistencies for review.
Estimates derived from a sample survey are affected by two types of errors: (1) non-sampling errors and (2) sampling errors. Non-sampling errors are the results of mistakes made in implementing data collection and data processing, such as failure to locate and interview the correct household, misunderstanding of the questions on the part of either the interviewer or the respondent, and data entry errors. Although numerous efforts were made during the implementation of the 2008 NDHS and 2010 NEDS to minimize these types of errors, non-sampling errors are impossible to avoid and difficult to evaluate statistically. Sampling errors, on the other hand, can be evaluated statistically. The sample of respondents selected in the 2010 NEDS is only one of many samples that could have been selected from the same population, using the same design and expected size. Each of these samples would yield results that differ somewhat from the results of the actual sample selected. Sampling errors are a measure of the variability between all possible samples. Although the degree of variability is not known exactly, it can be estimated from the survey results.
A sampling error is usually measured in terms of the standard error for a particular statistic (mean, percentage, etc.), which is the square root of the variance. The standard error can be used to calculate confidence intervals within which the true value for the population can reasonably be assumed to fall. For example, for any given statistic calculated from a sample survey, the value of that statistic will fall within a range of plus or minus two times the standard error of that statistic in 95 percent of all possible samples of identical size and design. If the sample of respondents had been selected as a simple random sample, it would have been possible to
use straightforward formulas for calculating sampling errors. However, the 2008 NDHS/2010 NEDS sample is the result of a multi-stage stratified design, and, consequently, it was necessary to use a more complex formula. The computer software used to calculate sampling errors for these data uses the Taylor linearization method of variance estimation for survey estimates that are means or proportions. The Taylor linearization method treats any percentage or average as a ratio estimate, r = y/x, where y represents the total sample value for variable y, and x represents the total number of cases in the group or subgroup under consideration.
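For readers who want to see the estimator concretely, the sketch below implements a standard textbook form of the Taylor linearization variance for a ratio r = y/x under a stratified, clustered design (finite population corrections ignored). It is illustrative only and not necessarily identical to the software used to produce the published sampling errors; the column names are placeholders.

```python
import numpy as np
import pandas as pd

def taylor_ratio_se(df, y="y", x="x", w="weight", stratum="stratum", cluster="cluster"):
    """Estimate r = sum(w*y) / sum(w*x) and its linearized standard error."""
    d = df.assign(wy=df[w] * df[y], wx=df[w] * df[x])
    # Weighted totals per cluster within each stratum.
    totals = d.groupby([stratum, cluster])[["wy", "wx"]].sum().reset_index()
    Y, X = totals["wy"].sum(), totals["wx"].sum()
    r = Y / X
    totals["z"] = totals["wy"] - r * totals["wx"]   # linearized values z_hi = y_hi - r * x_hi
    variance = 0.0
    for _, g in totals.groupby(stratum):
        m = len(g)                                  # number of clusters in the stratum
        if m > 1:
            zh = g["z"].sum()
            variance += m / (m - 1) * ((g["z"] ** 2).sum() - zh ** 2 / m)
    variance /= X ** 2
    return r, np.sqrt(variance)
```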
In addition to the standard error, the design effect (DEFT) for each estimate is also calculated. The design effect is defined as the ratio between the standard error using the given sample design and the standard error that would result if a simple random sample had been used. A DEFT value of 1.0 indicates that the sample design is as efficient as a simple random sample, while a value greater than 1.0 indicates the increase in the sampling error due to the use of a more complex and less statistically efficient design. Relative errors and confidence limits for the estimates are also computed.
Sampling errors for the 2010 NEDS are calculated for a few selected variables considered to be of primary interest. Table B.1 in the survey report (available under External Resources) presents, for each selected variable, the value of the statistic (R), its standard error (SE), the number of unweighted (N) and weighted (WN) cases, the design effect (DEFT), the relative standard error (SE/R), and the 95 percent confidence limits (R ± 2SE).
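As an illustration of how the quantities in those columns relate to one another, the sketch below computes DEFT, SE/R, and the limits R ± 2SE for a single estimated proportion, using the simple-random-sampling standard error sqrt(R(1-R)/N) as the DEFT denominator. The numbers are placeholders, not values from Table B.1.

```python
import math

# Placeholder example for one estimated proportion R with standard error SE
# and N unweighted cases (values below are invented for illustration).
def summarize(R, SE, N):
    se_srs = math.sqrt(R * (1 - R) / N)             # SE under simple random sampling
    deft = SE / se_srs                              # design effect
    rel_se = SE / R                                 # relative standard error
    return deft, rel_se, (R - 2 * SE, R + 2 * SE)   # approximate 95 percent limits

print(summarize(R=0.62, SE=0.011, N=5_000))
```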
Name | URL | Email |
---|---|---|
National Population Commission of Nigeria | http://www.population.gov.ng | info@populationgov.ng |
Use of the dataset must be acknowledged using a citation which would include:
The user of the data acknowledges that the original collector of the data, the authorized distributor of the data, and the relevant funding agency bear no responsibility for use of the data or for interpretations or inferences based upon such uses.
Name | Affiliation | Email | URL |
---|---|---|---|
National Population Commission of Nigeria | | info@populationgov.ng | http://www.population.gov.ng |
World Bank Development Data Group | World Bank | microdatalib@worldbank.org | |
DDI_NGA_2010_NEDS_v01_M_WB
Name | Affiliation | Role |
---|---|---|
Development Economics Data Group | The World Bank | Documentation of the DDI |
2013-07-18
Version 01 (July 2013)