{"type":"survey","doc_desc":{"title":"WLD_2015_PIRLS_v01_M","idno":"DDI_WLD_2015_PIRLS_v02_M_WB","producers":[{"name":"Development Economics Data Group","abbreviation":"DECDG","affiliation":"The World Bank","role":"Documentation of the DDI"}],"prod_date":"2018-09-18","version_statement":{"version":"DDI Document  - Version 02 - (04\/21\/21)\n This version is identical to DDI_WLD_2015_PIRLS_v01_M_WB but country field has been updated to capture all the countries covered by survey.\n\n Version 1.0 (September 2018)"}},"study_desc":{"title_statement":{"idno":"WLD_2015_PIRLS_v01_M","title":"Progress in International Reading and Literacy Study 2016","alt_title":"PIRLS 2016"},"authoring_entity":[{"name":"International Association for Educational Attainment","affiliation":""},{"name":"International Study Centre","affiliation":"Boston College"}],"production_statement":{"funding_agencies":[{"name":"National Centre for Education Statistics of the U.S. Department of Education","abbreviation":"NCES","role":""},{"name":"UK\u2019s National Foundation for Educational Research","abbreviation":"","role":""}]},"distribution_statement":{"contact":[{"name":"TIMSS & PIRLS International Study Center","affiliation":"","email":"timssandpirls@bc.edu","uri":"https:\/\/timssandpirls.bc.edu\/isc\/contact.html"}]},"series_statement":{"series_name":"Socio-Economic\/Monitoring Survey [hh\/sems]","series_info":"PIRLS (Progress in International Reading Literacy Study) was inaugurated in 2001 as a follow-up to IEA\u2019s 1991 Reading Literacy Study. Conducted every five years, PIRLS assesses the reading achievement of young students in their fourth year of schooling-an important transition point in their development as readers. Typically, by this time in their schooling, students have learned how to read and are now reading to learn. PIRLS is designed to complement IEA\u2019s TIMSS assessment of mathematics and science at the fourth grade. 
\n\nPIRLS 2016 is the fourth assessment in the current trend series, following PIRLS 2001, 2006, and 2011. There were 61 participants in PIRLS 2016, including 50 countries and 11 benchmarking entities (e.g., regions of countries as well as additional grades or language groups from the participating countries) that were assessed to provide comparative data to inform policy. For countries that have participated in a previous assessment since 2001, the PIRLS 2016 results provide an opportunity to evaluate progress in reading achievement across four time points: 2001, 2006, 2011, and 2016."},"version_statement":{"version":"- v01"},"study_info":{"abstract":"PIRLS provides internationally comparative data on how well children read by assessing students\u2019 reading achievement at the end of grade four. PIRLS 2016 is the fourth cycle of the study and collects considerable background information on how education systems provide educational opportunities to their students, as well as the factors that influence how students use this opportunity. In 2016 PIRLS was extended to include ePIRLS \u2013 an innovative assessment of online reading.\n\nThe results of PIRLS 2016 demonstrate a number of positive developments in reading literacy worldwide. 
For the first time in the history of the study, as many as 96 percent of fourth graders from over 60 education systems achieved above the PIRLS low international benchmark.","coll_dates":[{"start":"2015-12","end":"2016-03","cycle":"Southern Hemisphere"},{"start":"2016-06","end":"2016-09","cycle":"Northern Hemisphere"}],"nation":[{"name":"United Arab Emirates","abbreviation":"ARE"},{"name":"Argentina","abbreviation":"ARG"},{"name":"Australia","abbreviation":"AUS"},{"name":"Austria","abbreviation":"AUT"},{"name":"Azerbaijan","abbreviation":"AZE"},{"name":"Belgium","abbreviation":"BEL"},{"name":"Bulgaria","abbreviation":"BGR"},{"name":"Bahrain","abbreviation":"BHR"},{"name":"Canada","abbreviation":"CAN"},{"name":"Chile","abbreviation":"CHL"},{"name":"Czech Republic","abbreviation":"CZE"},{"name":"Germany","abbreviation":"DEU"},{"name":"Denmark","abbreviation":"DNK"},{"name":"Spain","abbreviation":"ESP"},{"name":"Finland","abbreviation":"FIN"},{"name":"France","abbreviation":"FRA"},{"name":"United Kingdom","abbreviation":"GBR"},{"name":"Georgia","abbreviation":"GEO"},{"name":"Hungary","abbreviation":"HUN"},{"name":"Ireland","abbreviation":"IRL"},{"name":"Iran, Islamic Rep.","abbreviation":"IRN"},{"name":"Israel","abbreviation":"ISR"},{"name":"Italy","abbreviation":"ITA"},{"name":"Kazakhstan","abbreviation":"KAZ"},{"name":"Kuwait","abbreviation":"KWT"},{"name":"Lithuania","abbreviation":"LTU"},{"name":"Latvia","abbreviation":"LVA"},{"name":"Morocco","abbreviation":"MAR"},{"name":"Malta","abbreviation":"MLT"},{"name":"Netherlands","abbreviation":"NLD"},{"name":"Norway","abbreviation":"NOR"},{"name":"New Zealand","abbreviation":"NZL"},{"name":"Oman","abbreviation":"OMN"},{"name":"Poland","abbreviation":"POL"},{"name":"Portugal","abbreviation":"PRT"},{"name":"Qatar","abbreviation":"QAT"},{"name":"Russian Federation","abbreviation":"RUS"},{"name":"Saudi Arabia","abbreviation":"SAU"},{"name":"Singapore","abbreviation":"SGP"},{"name":"Slovak 
Republic","abbreviation":"SVK"},{"name":"Slovenia","abbreviation":"SVN"},{"name":"Sweden","abbreviation":"SWE"},{"name":"Trinidad and Tobago","abbreviation":"TTO"},{"name":"Taiwan, China","abbreviation":"TWN"},{"name":"United States","abbreviation":"USA"},{"name":"South Africa","abbreviation":"ZAF"}],"geog_coverage":"Nationally representative samples of approximately 4,000 students from 150 to 200 schools participated in PIRLS 2016. About 319,000 students, 310,000 parents, 16,000 teachers, and 12,000 schools participated in total.","analysis_unit":"The unit of analysis describes:\n\n- Schools\n\n-  Students\n\n-  Parents\n\n-  Teachers","universe":"All students enrolled in the grade that represents four years of schooling counting from the first year of ISCED Level 1, providing the mean age at the time of testing is at least 9.5 years.\n\nAll students enrolled in the target grade, regardless of their age, belong to the international target population and should be eligible to participate in PIRLS. Because students are sampled in two stages, first by randomly selecting a school and then randomly selecting a class from within the school, it is necessary to identify all schools in which eligible students are enrolled. Essentially, eligible schools for PIRLS are those that have any students enrolled in the target grade, regardless of type of school.","data_kind":"Sample survey data [ssd]","notes":"The PIRLS 2016 contains information on the following:\n\n-  Student achievement\n\n-  Teacher background\n\n-  Student background\n\n-  School background\n\n-  Parent background"},"method":{"data_collection":{"sampling_procedure":"PIRLS is designed to provide valid and reliable measurement of trends in student achievement in countries around the world, while keeping to a minimum the burden on schools, teachers, and students. 
The PIRLS program employs rigorous school and classroom sampling techniques so that achievement in the student population as a whole may be estimated accurately by assessing just a sample of students from a sample of schools. PIRLS assesses reading achievement at fourth grade. The PIRLS 2016 cycle also included PIRLS Literacy, a new, less difficult reading literacy assessment, and ePIRLS, an extension of PIRLS with a focus on online informational reading.\n\nPIRLS employs a two-stage random sample design, with a sample of schools drawn as a first stage and one or more intact classes of students selected from each of the sampled schools as a second stage. Intact classes of students are sampled, rather than individuals from across the grade level or of a certain age, because PIRLS pays particular attention to students\u2019 curricular and instructional experiences, and these typically are organized on a classroom basis. Sampling intact classes also has the operational advantage of causing less disruption to the school\u2019s day-to-day business than individual student sampling.\n\nSAMPLE SIZE\n\nFor most countries, the PIRLS precision requirements are met with a school sample of 150 schools and a student sample of 4,000 students for each target grade. Depending on the average class size in the country, one class from each sampled school may be sufficient to achieve the desired student sample size. For example, if the average class size in a country were 27 students, a single class from each of 150 schools would provide a sample of 4,050 students (assuming full participation by schools and students). Some countries choose to sample more than one class per school, either to increase the size of the student sample or to provide a better estimate of school-level effects.\n\nFor countries choosing to participate in both PIRLS and PIRLS Literacy, the required student sample size is doubled, i.e., around 8,000 sampled students. 
Countries could choose to select more schools or more classes within sampled schools to achieve the required sample size. Because ePIRLS is designed to be administered to students also taking PIRLS, the PIRLS sample size requirement remains the same for countries choosing also to participate in ePIRLS.  \n\nPIRLS STRATIFIED TWO-STAGE CLUSTER SAMPLE DESIGN\n\nThe basic international sample design for PIRLS is a stratified two-stage cluster sample design, as follows:\n\n-  First Sampling Stage. For the first sampling stage, schools are sampled with probabilities proportional to their size (PPS) from the list of all schools in the population that contain eligible students. The schools in this list (or sampling frame) may be stratified (sorted) according to important demographic variables. Schools for the field test and data collection are sampled simultaneously using a systematic random sampling approach. Two replacement schools are also pre-assigned to each sampled school during the sample selection process, and these replacement schools are held in reserve in case the originally sampled school refuses to participate. Replacement schools are used solely to compensate for sample size losses in the event that the originally sampled school does not participate. School sampling is conducted for each country by Statistics Canada with assistance from IEA Hamburg, using the sampling frame provided by the country\u2019s National Research Coordinator.\n\n-  Second Sampling Stage. The second sampling stage consists of the selection of one (or more) intact class from the target grade of each participating school. Class sampling in each country is conducted by the National Research Coordinator using the Within-School Sampling Software (WinW3S) developed by IEA Hamburg and Statistics Canada. 
Having secured a sampled school\u2019s agreement to participate in the assessment, the National Research Coordinator requests information about the number of classes and teachers in the school and enters it in the WinW3S database.\n\nClasses smaller than a specified minimum size are grouped into pseudo-classes prior to sampling. The software selects classes with equal probabilities within schools. All students in each sampled class participate in the assessment. Sampled classes that refuse to participate may not be replaced.\n\nFor countries participating in both PIRLS and PIRLS Literacy, students within a sampled class are randomly assigned either a PIRLS or a PIRLS Literacy booklet through a booklet rotation system. This is done to ensure that PIRLS and PIRLS Literacy are administered to probabilistically equivalent samples. In countries taking part in ePIRLS, all students assessed in PIRLS are expected to participate in ePIRLS.\n\nSTRATIFICATION\n\nStratification consists of arranging the schools in the target population into groups, or strata, that share common characteristics such as geographic region or school type. Examples of stratification variables used in PIRLS include region of the country (e.g., states or provinces); school type or source of funding (e.g., public or private); language of instruction; level of urbanization (e.g., urban or rural area); socioeconomic indicators; and school performance on national examinations.\n\nIn PIRLS, stratification is used to:\n\n-  Improve the efficiency of the sample design, thereby making survey estimates more reliable\n\n-  Apply different sample designs, such as disproportionate sample allocations, to specific groups of schools (e.g., those in certain states or provinces)\n\n-  Ensure proportional representation of specific groups of schools in the sample\n\nSchool stratification can take two forms: explicit and implicit. 
In explicit stratification, a separate school list or sampling frame is constructed for each stratum and a sample of schools is drawn from that stratum. In PIRLS, the major reason for considering explicit stratification is disproportionate allocation of the school sample across strata. For example, in order to produce equally reliable estimates for each geographic region in a country, explicit stratification by region may be used to ensure the same number of schools in the sample for each region, regardless of the relative population size of the regions.\n\nImplicit stratification consists of sorting the schools by one or more stratification variables within each explicit stratum, or within the entire sampling frame if explicit stratification is not used. The combined use of implicit strata and systematic sampling is a very simple and effective way of ensuring a proportional sample allocation of students across all implicit strata. Implicit stratification also can lead to improved reliability of achievement estimates when the implicit stratification variables are correlated with student achievement.\n\nNational Research Coordinators consult with Statistics Canada and IEA Hamburg to identify the stratification variables to be included in their sampling plans. The school sampling frame is sorted by the stratification variables prior to sampling schools so that adjacent schools are as similar as possible. Regardless of any other explicit or implicit variables that may be used, the school size is always included as an implicit stratification variable.\n\nSCHOOL SAMPLING FRAME\n\nOne of the National Research Coordinator\u2019s most important sampling tasks is the construction of a school sampling frame for the target population. The sampling frame is a list of all schools in the country that have students enrolled in the target grade and is the list from which the school sample is drawn. 
A well-constructed sampling frame provides complete coverage of the national target population without being contaminated by incorrect or duplicate entries or entries that refer to elements that are not part of the defined target population.\n\nA suitable school measure of size (MOS) is a critical aspect of the national sampling plan, because the size of a school determines its probability of selection. The most appropriate school measure of size is an up-to-date count of the number of students in the target grade. If the number of students in the target grade is not available, total student enrollment in the school may be the best available substitute.","coll_mode":"Face-to-face [f2f]","research_instrument":"The PIRLS 2016 questionnaire results provide a wealth of information about the home, school, and classroom contexts in which students learn to read.","weight":"National student samples in PIRLS are designed to accurately represent the target population within a specified margin of sampling error. After the data have been collected and processed, sample statistics such as means and percentages that describe student characteristics are computed as weighted estimates of the corresponding population parameters, where the weighting factor is the sampling weight. A student\u2019s sampling weight is essentially the inverse of the student\u2019s probability of selection, with appropriate adjustments for nonresponse. In principle, the stratified two-stage sampling procedure used in PIRLS, where schools are sampled with probability proportional to school size and classes are sampled with probability inversely proportional to school size, provides student samples with equal selection probabilities. 
However, in practice, disproportionate sampling across explicit strata (achieved by varying the number of classes selected) and differential patterns of nonresponse can result in varying selection probabilities, requiring a unique sampling weight for the students in each participating class in the study.\n\nThe student sampling weight in PIRLS is a combination of weighting components reflecting selection probabilities and sampling outcomes at three levels: school, class, and student. At each level, the weighting component consists of a basic weight that is the inverse of the probability of selection at that level, together with an adjustment for nonparticipation. The overall sampling weight for each student is the product of the three weighting components: school, class (within school), and student (within class).","method_notes":"DATA VERIFICATION AT THE NATIONAL CENTERS\n\nBefore sending the data to IEA Hamburg for further processing, national centers carried out mandatory validation and verification steps on all entered data and undertook corrections as necessary.\n\nWhile the questionnaire data were being entered, the data manager or other staff at each national center used the information from the Tracking Forms to verify the completeness of the materials. Student participation information (e.g., whether a student participated in the assessment or was absent) was entered or imported into WinW3S.\n\nThe validation process was supported by an option in WinW3S to generate an inconsistency report. This report listed all of the types of discrepancies between variables recorded during the within-school sampling and test administration processes and made it possible to cross-check these data against data entered in the DME, the database for online respondents, and the uploaded student data on the central international server. When inconsistencies were identified, data managers were instructed to resolve the issue before final data submission to IEA Hamburg. 
If inconsistencies remained or the national center could not solve them, IEA Hamburg asked the center to provide documentation on these problems.\n\nUpon submitting the validated data to IEA Hamburg, NRCs also provided extensive documentation including hard copies or electronic scans of all original Student and Teacher Tracking Forms, Student Listing Forms, and when applicable, a report on procedural activities collected as part of the online Survey Activities Questionnaire.\n\nCLEANING THE INTERNATIONAL AND NATIONAL DATABASES\n\nTo ensure the integrity of the international database, a uniform data cleaning process was followed, involving regular consultation between IEA Hamburg and the NRCs. After each country had submitted its data, codebooks, and documentation, IEA Hamburg, in collaboration with the NRCs, conducted a four-step cleaning procedure on the submitted data and documentation:\n\n1. A structural check\n\n2. A check of the identification (ID) variables\n\n3. Linkage cleaning\n\n4. Background cleaning\n\nData cleaning was an iterative process, with numerous iterations of the four-step cleaning procedure being implemented for each national data set. This repetition ensured that all data were properly cleaned and that any new errors that could have been introduced during the data cleaning were rectified. The cleaning process was repeated as many times as necessary until all data were made consistent and comparable. Any inconsistencies detected during the cleaning process were resolved in collaboration with national centers, and all corrections made during the cleaning process were documented in a cleaning report, produced for each country.\n\nAfter the final cleaning iteration, each country\u2019s data were sent to Statistics Canada for the calculation of sampling weights, and then the data, including sampling weights, were sent to the TIMSS & PIRLS International Study Center so that scaling could be performed. 
The NRCs were provided with interim data products to review at two different points in the process."},"analysis_info":{"response_rate":"For a full table of school participation rates, which vary by country, please see Appendix C on page 311 of the PIRLS 2016 Report (attached as a Related Material).","sampling_error_estimates":"Because PIRLS is fundamentally a study of student achievement, the precision of estimates of student achievement is of primary importance. To meet the PIRLS standards for sampling precision, national student samples should provide for a standard error no greater than .035 standard deviation units for the country\u2019s mean achievement. This standard error corresponds to a 95% confidence interval of \u00b1 7 score points for the achievement mean and of \u00b1 10 score points for the difference between achievement means from successive cycles (e.g., the difference between a country\u2019s achievement mean on PIRLS 2011 and PIRLS 2016). Sample estimates of any student-level percentage (e.g., a student background characteristic) should have a confidence interval of \u00b1 3.5%."}},"data_access":{"dataset_availability":{"access_place":"International Association for the Evaluation of Educational Achievement","access_place_uri":"https:\/\/timssandpirls.bc.edu\/pirls2016\/international-database\/index.html","original_archive":"International Association for the Evaluation of Educational Achievement\nhttps:\/\/timssandpirls.bc.edu\/pirls2016\/international-database\/index.html\nCost: None"},"dataset_use":{"contact":[{"name":"International Association for the Evaluation of Educational Achievement","affiliation":"","email":"","uri":"https:\/\/timssandpirls.bc.edu\/pirls2016\/international-database\/index.html"}],"cit_req":"Use of the dataset must be acknowledged using a citation which would include:\n\n- the Identification of the Primary Investigator\n\n- the title of the survey (including country, acronym and year of implementation)\n\n- the 
survey reference number\n\n- the source and date of download\n\nExample:\n\nInternational Association for the Evaluation of Educational Achievement. World - Progress in International Reading Literacy Study (PIRLS) 2016, Ref. WLD_2015_PIRLS_v01_M. Dataset downloaded from [url] on [date].","disclaimer":"The user of the data acknowledges that the original collector of the data, the authorized distributor of the data, and the relevant funding agency bear no responsibility for use of the data or for interpretations or inferences based upon such uses."}}},"schematype":"survey","data_files":[],"variables":[],"variable_groups":[]}