PISA 2015 Design

Methodology
Quantitative Study
Method(s)
  • Overall approach to data collection
    • Proctored assessment and self-administered questionnaire
  • Specification
    • Cross-sectional, every 3 years
Target population

15-year-old students enrolled in an educational institution at Grade 7 or higher in their respective countries and economies, including those:

  • Enrolled full-time in an educational institution
  • Enrolled in educational institutions but attending only on a part-time basis
  • Enrolled in a vocational training program, or any other related type of educational program
  • Attending a foreign school within the country (as well as students from other countries attending any of the programs in the first three categories)
Sample design
  • The international PISA target population in each participating country and economy consisted of 15-year-old students attending an educational institution in Grade 7 and higher. 
  • In all but one country (the Russian Federation), the sampling design used for the PISA assessment was a two-stage stratified sample design.

The first-stage sampling units

  • Consisted of individual schools with 15-year-old students or the possibility of having such students at the time of assessment.
  • Schools were sampled systematically from a comprehensive national list of all PISA-eligible schools – the school sampling frame – with probabilities that were proportional to a measure of size.
  • The measure of size was a function of the estimated number of PISA-eligible 15-year-old students enrolled in the school. This is referred to as systematic probability proportional to size (PPS) sampling.
  • Prior to sampling, schools in the sampling frame were assigned to mutually exclusive groups based on school characteristics (explicit strata) specifically chosen to improve the precision of sample-based estimates.
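The systematic PPS selection described above can be sketched in a few lines. This is an illustrative sketch, not the operational PISA sampling software: the function name and data layout are assumptions, and schools are taken to be pre-sorted by explicit stratum so the systematic pass spreads the sample across strata.

```python
import random

def systematic_pps_sample(schools, n_sample):
    """Systematic probability-proportional-to-size (PPS) sampling.

    `schools` is a list of (school_id, measure_of_size) pairs, assumed
    already sorted by stratum. Equally spaced selection points are laid
    over the cumulative measure of size, so a school's chance of
    selection is proportional to its size.
    """
    total = sum(size for _, size in schools)
    interval = total / n_sample               # sampling interval
    start = random.uniform(0, interval)       # random start
    targets = [start + k * interval for k in range(n_sample)]

    selected, cumulative, i = [], 0.0, 0
    for school_id, size in schools:
        cumulative += size
        # select this school for every target point its size span covers
        while i < len(targets) and targets[i] <= cumulative:
            selected.append(school_id)
            i += 1
    return selected
```

A school whose measure of size exceeds the sampling interval can be hit more than once; in practice such "certainty" schools are typically selected outright before the systematic pass.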

 

The second-stage sampling units in countries using the two-stage design were students within sampled schools.

  • Once schools were selected to be in the sample, a complete list of each sampled school’s 15-year-old students was prepared.
  • For each country a target cluster size (TCS) was set, typically 42 students for computer-based countries and 35 for paper-based countries, although by agreement countries could use alternative values. The sample size within the schools is prescribed, within limits, in the PISA Technical Standards.
  • From each list of students that contained more than the TCS, a sample of typically 42 students was selected with equal probability.
  • For lists of fewer than the TCS, all students on the list were selected.
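The within-school step above amounts to an equal-probability draw capped at the target cluster size. The following is a minimal sketch under that reading; it ignores the additional rules (e.g., systematic selection from a sorted student list) that the operational procedures apply.

```python
import random

def sample_students(student_list, tcs=42):
    """Equal-probability sample of up to `tcs` students from one school.

    If the school lists no more students than the target cluster size
    (TCS), every listed student is selected; otherwise a simple random
    sample of TCS students is drawn.
    """
    if len(student_list) <= tcs:
        return list(student_list)
    return random.sample(student_list, tcs)
```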
Sample size

Per country/economy

Random sample of a minimum of 150 schools

In total

Approximately 540,000 students

Data collection techniques and instruments

The assessment

  • Computer-based tests were used in a range of countries and economies, with each student's assessment lasting a total of two hours.
  • Test items were a mixture of multiple-choice questions and questions requiring students to construct their own responses.
  • The items were organized in groups based on a passage setting out a real-life situation.
  • About 390 minutes of test items were covered in total, with different students taking different combinations of test items.

Background questionnaires

  • Students also answered a background questionnaire:
    • It sought information about the students themselves, their homes, and their school and learning experiences.
    • It took about 35 minutes to complete.
  • School principals completed a questionnaire that covered the school system and the learning environment.
  • In some countries and economies, optional questionnaires were administered:
    • To teachers
    • To parents, who were asked to provide information on:
      • Their perceptions of and involvement in their child’s school
      • Their support for learning in the home
      • Their child’s career expectations, particularly in science.
  • Countries could choose two other optional questionnaires for students:
    • One asked students about their familiarity with and use of information and communication technology (ICT)
    • The second sought information about their education to date, including any interruptions in their schooling, and whether and how they were preparing for a future career.
Techniques
  • achievement test
  • questionnaire
Languages
  • The achievement test was administered in 48 different languages.
  • The most common languages:
    • English (19 countries)
    • Spanish (12 countries)
    • Russian (7 countries)
Translation procedures
  • Optimization of the English-language source version for translation through translatability assessment
  • Development of two source versions of the instruments, one in English and one in French (except for the financial literacy instruments and the operational manuals, which were provided only in English)
  • Double translation design
  • Preparation of detailed instructions for the localization of the instruments for the field trial and for their review for the main survey
  • Preparation of translation and adaptation guidelines
  • Training of national staff in charge of the translation and/or adaptation of the instruments
  • Validation of the translated and adapted national versions: verification by independent verifiers, and review by the PISA Consortium and the translation referee or the questionnaires team
Quality control of operations

Measures during data collection

  • Procedures for sampling, translation, survey administration, and data processing were developed in accordance with the PISA Technical Standards and fully documented.
  • The PISA Consortium provided comprehensive manuals that explained the implementation of the survey:
    • They included:
      • Precise instructions for the work of school coordinators
      • Scripts for test administrators to use during the assessment sessions.
    • Proposed adaptations to survey procedures or modifications to the assessment session script were submitted to the PISA Consortium for approval prior to verification.
  • Participating countries and economies were required to ensure that test administrators worked with the school coordinator to prepare the assessment session, including
    • Reviewing and updating the Student Tracking form
    • Completing the Session Attendance form, which is designed to record students’ attendance and instrument allocation
    • Completing the Session Report form, which is designed to summarize session times, any disturbance to the session, etc.
    • Ensuring that the number of test booklets and questionnaires collected from students tallied with the number sent to the school (paper‑based assessment countries) or ensuring that the number of USB sticks used for the assessment was accounted for (computer-based assessment countries)
    • Sending the school questionnaire, student questionnaires, parent and teacher questionnaires (if applicable), and all test materials (both completed and not completed) to the national center after the testing.
  • The PISA Consortium responsible for overseeing survey operations implemented all phases of the PISA Quality Monitor (PQM) process:
    • Phases included:
      • Interviewing and hiring PQM candidates in each of the countries
      • Organizing their training
      • Selecting the schools to visit
      • Collecting information from the PQM visits.
    • PQMs
      • Are independent contractors located in participating countries, hired by the international survey operations contractor.
      • They visited a sample of schools to observe test administration and to record the implementation of the documented field-operations procedures in the main survey.
    • Typically, two or three PQMs were hired for each country, and they visited an average of 15 schools in each country. If there were adjudicated regions in a country, it was usually necessary to hire additional PQMs, as a minimum of five schools were observed in adjudicated regions.
  • All quality-assurance data collected throughout the PISA 2015 assessment were entered and collated in a central data adjudication database on the quality of field operations, printing, translation, school and student sampling, and coding. Comprehensive reports were then generated for the PISA Adjudication Group.

Measures during data processing and cleaning

  • For Computer Based Assessment (CBA) participants, the coding designs for the CBA responses for mathematics, reading, science, and financial literacy (when applicable) were greatly simplified through use of the Open-Ended Coding System (OECS), which allowed responses to be organized according to the agreed-upon coding designs.
  • Through coder-reliability monitoring, coding inconsistencies or problems within and across countries could be detected early in the coding process through OECS and Open-Ended Response Scoring (OERS) output reports, allowing action to be taken as soon as possible.
  • The OECS/OERS worked in concert with the DME database to generate two types of reliability reports: proportion agreement and coding-category distributions. Coder-reliability studies also made use of the OECS/OERS reports submitted by national centers.
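The two report types can be illustrated with a minimal sketch. The helper names are hypothetical and the actual OECS/OERS output formats differ; this only shows the quantities the reports summarize for a set of double-coded responses.

```python
from collections import Counter

def proportion_agreement(codes_a, codes_b):
    """Share of double-coded responses to which both coders gave the same code."""
    if len(codes_a) != len(codes_b):
        raise ValueError("both coders must have scored the same responses")
    return sum(a == b for a, b in zip(codes_a, codes_b)) / len(codes_a)

def coding_category_distribution(codes):
    """Relative frequency of each code a coder assigned."""
    counts = Counter(codes)
    return {code: n / len(codes) for code, n in counts.items()}
```

A low agreement rate, or a coder whose category distribution diverges from the others', is the kind of inconsistency the monitoring reports are meant to surface early.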
  • Data-quality standards in PISA required minimum participation rates for schools as well as for students. These standards were established to minimize the potential for response biases. In the case of countries meeting these standards, it was likely that any bias resulting from non-response would be negligible, i.e., typically smaller than the sampling error.
  • All quality assurance data collected throughout the cycle were entered and compiled in a central data adjudication database. The Technical Advisory Group (TAG) and the sampling referee reviewed this database to make country-by-country evaluations on the quality of field operations, printing, translation, school and student sampling, and coding. The final report by TAG experts was then used for the purpose of data adjudication.