PISA 2018 Design

Quantitative Study
  • Overall approach to data collection
    • Proctored assessment and self-administered questionnaire
  • Specification
    • Cross-sectional, every 3 years
Target population

15-year-old students enrolled in educational institutions at Grade 7 and higher in their respective countries or economies, including those:

  • Enrolled full-time in educational institutions
  • Enrolled in educational institutions but attending only on a part-time basis
  • Enrolled in vocational training programs, or any other related type of educational program
  • Attending foreign schools within the country/economy (as well as students from other countries/economies attending any of the programs in the first three categories)
Sample design
  • The international PISA target population in each participating country and economy consisted of 15-year-old students attending an educational institution in Grade 7 and higher. 
  • In all but one country (the Russian Federation), the sampling design used for the PISA assessment was a two-stage stratified sample design.


The first-stage sampling units

  • Consisted of individual schools with 15-year-old students or the possibility of having such students at the time of assessment.
  • Schools were sampled systematically from a comprehensive national list of all PISA-eligible schools – the school sampling frame – with probabilities that were proportional to a measure of size.
  • The measure of size was a function of the estimated number of PISA-eligible 15-year-old students enrolled in the school. This is referred to as systematic probability-proportional-to-size (PPS) sampling.
  • Prior to sampling, schools in the sampling frame were assigned to mutually exclusive groups based on school characteristics (explicit strata) specifically chosen to improve the precision of sample-based estimates.
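The school-selection rule described above can be sketched in Python. This is a minimal, illustrative implementation of systematic PPS sampling; the function name, the frame layout, and the seed handling are assumptions for the example, and PISA's operational procedure (stratum sorting, measure-of-size adjustments) is more elaborate.

```python
import random

def pps_systematic_sample(frame, n_schools, seed=None):
    """Systematic probability-proportional-to-size (PPS) sampling.

    frame: list of (school_id, measure_of_size) pairs, assumed to be
    already ordered by the explicit/implicit stratification variables.
    Returns the ids of the sampled schools.
    """
    rng = random.Random(seed)
    total = sum(mos for _, mos in frame)
    interval = total / n_schools                 # sampling interval
    start = rng.uniform(0, interval)             # random start in [0, interval)
    points = [start + k * interval for k in range(n_schools)]

    sampled, cumulative, i = [], 0.0, 0
    for school_id, mos in frame:
        cumulative += mos
        # A school is selected once for every selection point that
        # falls inside its cumulative-size segment; a very large
        # school (mos > interval) can therefore be hit more than once.
        while i < len(points) and points[i] <= cumulative:
            sampled.append(school_id)
            i += 1
    return sampled
```

Because each school's chance of selection is proportional to its share of the cumulative measure of size, larger schools are sampled more often, which is exactly the PPS property the bullet describes.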


The second-stage sampling units in countries using the two-stage design were students within sampled schools.

  • Once schools were selected to be in the sample, a complete list of each sampled school’s 15-year-old students was prepared.
  • For each country a target cluster size (TCS) was set, typically 42 students for computer-based countries and 35 for paper-based countries, although by agreement countries could use alternative values. The sample size within the schools is prescribed, within limits, in the PISA Technical Standards.
  • From each list of students that contained more than the TCS, a sample of TCS students (typically 42) was selected with equal probability.
  • For lists of fewer than the TCS, all students on the list were selected.
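The within-school selection rule above amounts to a simple decision: take everyone when the list is at or below the target cluster size, otherwise draw an equal-probability sample of TCS students. A minimal sketch (function and parameter names are illustrative, not PISA's operational software):

```python
import random

def sample_students(student_list, tcs=42, seed=None):
    """Equal-probability student sample within one sampled school.

    If the school's list of PISA-eligible students is no longer than
    the target cluster size (TCS), all students are selected;
    otherwise a simple random sample of TCS students is drawn.
    """
    if len(student_list) <= tcs:
        return list(student_list)
    return random.Random(seed).sample(student_list, tcs)
```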
Sample size

The student sample size is a minimum of 6,300 assessed students per country/economy. The school sample size needs to result in a minimum of 150 participating schools per country/economy.

Some 600,000 15-year-old students responded to the PISA 2018 assessment in 79 countries/economies.

Data collection techniques and instruments

The assessment

  • Computer-based tests were used in a range of countries and economies, with each student's assessment lasting a total of two hours.
  • In reading, a multi‑stage adaptive approach was applied in computer-based tests whereby students were assigned a block of test items based on their performance in preceding blocks.
  • Test items were a mixture of multiple-choice questions and questions requiring students to construct their own responses.
  • The items were organized in groups based on a passage setting out a real-life situation.
  • More than 15 hours of test items were covered, with different students taking different combinations of test items.
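The multi-stage adaptive approach for reading can be illustrated with a toy routing rule: performance on the preceding block determines the difficulty of the next block assigned. The thresholds and block labels below are hypothetical; PISA 2018's actual reading design used its own stage structure and routing probabilities.

```python
def next_block(prev_proportion_correct,
               low_cut=0.4, high_cut=0.7):
    """Illustrative multi-stage adaptive routing rule.

    prev_proportion_correct: share of items answered correctly in the
    preceding block (0.0 to 1.0). Cut-points are hypothetical.
    """
    if prev_proportion_correct < low_cut:
        return "easier block"
    if prev_proportion_correct < high_cut:
        return "medium block"
    return "harder block"
```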


Background questionnaires

  • Students also answered a background questionnaire
    • Completion time 35 minutes
    • Information about the students themselves, their home and school, and learning experiences
  • School principals completed a questionnaire that covered the school system and the learning environment.
  • In some countries/economies optional questionnaires were distributed to
    • Teachers
    • Parents
  • Countries could choose three other optional questionnaires for students:
    • One asked students about their familiarity with and use of information and communication technologies (ICT)
    • The second sought information about their education to date
    • The third was about student well-being
Languages of the achievement test and questionnaire
  • The achievement test was administered in 54 different languages.
  • The most common languages were:
    • English (21 countries)
    • Spanish (10 countries)
    • Russian (10 countries)
Translation procedures
  • Optimization of the English source version for translation through translatability assessment
  • Development of two source versions of the instruments, in English and French (except for the financial literacy instruments and the operational manuals, which were provided only in English)
  • Double translation design
  • Preparation of detailed instructions for the localization of the instruments for the Field Trial and for their review for the Main Survey
  • Preparation of translation/adaptation guidelines
  • Training of national staff in charge of translation/adaptation of the instruments
  • Validation of the translated/adapted national versions, including verification by independent verifiers, review by cApStAn staff and the translation referee or the questionnaires team
Quality control of operations

Measures during data collection

  • Procedures for sampling, translation, survey administration, and data processing were developed in accordance with the PISA Technical Standards and fully documented.
  • The PISA Consortium provided comprehensive manuals explaining the implementation of the survey.
    • They included:
      • Precise instructions for the work of school coordinators
      • Scripts for test administrators to use during the assessment sessions
  • Proposed adaptations to survey procedures or modifications to the assessment session script were submitted to the PISA Consortium for approval prior to verification.
  • Participating countries and economies were required to ensure that test administrators worked with the school coordinator to prepare the assessment session, including:
    • Reviewing and updating the Student Tracking form
    • Completing the Session Attendance form, which is designed to record students’ attendance and instrument allocation
    • Completing the Session Report form, which is designed to summarize session times, any disturbances to the session, etc.
    • Ensuring that the number of test booklets and questionnaires collected from students tallied with the number sent to the school (paper‑based assessment countries) or that the number of USB sticks used for the assessment was accounted for (computer-based assessment countries)
    • Sending the school questionnaire, student questionnaires, parent and teacher questionnaires (if applicable), and all test materials (both completed and not completed) to the national center after the testing
  • The PISA Consortium responsible for overseeing survey operations implemented all phases of the PISA Quality Monitor (PQM) process:
    • Phases included:
      • Interviewing and hiring PQM candidates in each of the countries
      • Organizing their training
      • Selecting the schools to visit
      • Collecting information from the PQM visits
    • PQMs
      • Independent contractors located in participating countries who were hired by the international survey operations contractor
      • Visited a sample of schools to observe test administration and to record the implementation of the documented field-operations procedures in the main survey
    • Typically, two or three PQMs were hired for each country, and they visited an average of 15 schools in each country. If there were adjudicated regions in a country, it was usually necessary to hire additional PQMs, as a minimum of five schools were observed in adjudicated regions.
  • Quality-assurance data collected throughout the PISA 2018 assessment were entered and collated in a central data adjudication database on the quality of field operations, printing, translation, school and student sampling, and coding. Comprehensive reports were then generated for the PISA Adjudication Group.


Measures during data processing and cleaning

  • For Computer-Based Assessment (CBA) participants, the coding designs for the CBA responses for mathematics, reading, science, and financial literacy (when applicable) were greatly simplified through use of the Open-Ended Coding System (OECS), which organizes responses according to the agreed-upon coding designs.
  • Through coder reliability monitoring, coding inconsistencies or problems within and across countries could be detected early in the coding process through OECS and Open-Ended Response Scoring (OERS) output reports, allowing action to be taken as soon as possible.
  • The OECS/OERS worked in concert with the DME database to generate two types of reliability reports: proportion-agreement reports and coding-category distributions. Coder reliability studies also made use of the OECS/OERS reports submitted by national centers.
  • Data-quality standards in PISA required minimum participation rates for schools as well as for students. These standards were established to minimize the potential for response bias. In the case of countries meeting these standards, it was likely that any bias resulting from non-response would be negligible, i.e., typically smaller than the sampling error.
  • All quality assurance data collected throughout the cycle are entered and collated in a central data adjudication database. The technical advisory group (TAG) and the sampling referee review this database to make country-by-country evaluations on the quality of field operations, printing, translation, school and student sampling and coding. The final report by TAG experts is then used for the purpose of data adjudication.
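The two reliability measures mentioned above (proportion agreement and coding-category distributions) reduce to simple calculations. A minimal sketch, with illustrative function names, assuming each coder's codes are aligned lists over the same set of responses:

```python
from collections import Counter

def proportion_agreement(codes_a, codes_b):
    """Share of responses to which two coders assigned the same code."""
    if len(codes_a) != len(codes_b):
        raise ValueError("code lists must be the same length")
    matches = sum(a == b for a, b in zip(codes_a, codes_b))
    return matches / len(codes_a)

def code_distribution(codes):
    """Relative frequency of each coding category assigned by a coder."""
    counts = Counter(codes)
    n = len(codes)
    return {code: c / n for code, c in counts.items()}
```

Comparing the two coders' category distributions alongside the agreement rate helps distinguish random disagreement from a systematic coding drift in one coder.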