ICILS 2013 Design

Quantitative Study
  • Overall approach to data collection
    • Proctored assessment and self-administered surveys
Target population

Students

  • All students enrolled in the grade representing eight years of schooling, counting from the first year of ISCED Level 1, provided that the mean age at the time of testing was at least 13.5 years.

Teachers

  • School staff members who provided student instruction through the delivery of lessons, whether to an entire class in a classroom, to small groups in resource rooms, or one-on-one inside or outside the classroom.
  • All teachers who taught regular school subjects to students of the target grade (regardless of the subject or the number of hours taught) during the ICILS testing period and since the beginning of the school year.

Schools

  • A school consisted of a whole unit with a defined number of teachers and students, which could include different programs or tracks.
  • The definition of “school” was based on the environment shared by students, which usually comprises a shared faculty, set of buildings, and social space, and often also a shared administration and charter.
  • Schools eligible for ICILS were those at which target-grade students were enrolled.
Sample design

The ICILS sample design was referred to as a complex design because it involved multistage sampling, stratification, and cluster sampling.

A stratified two-stage probability cluster sampling design was used in all ICILS countries: schools were sampled at the first stage, and students and teachers within sampled schools at the second.


First stage 

  • Schools were selected systematically with probabilities proportional to their size (PPS) as measured by the total number of enrolled target-grade students.
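The first-stage selection described above can be sketched as systematic PPS sampling: cumulate the schools' size measures, draw a random start within one sampling interval, then select the school whose cumulative range contains each selection point. This is a minimal illustration, not ICILS's operational sampling software; the function name and data layout are assumptions, and within-stratum frame sorting (implicit stratification) is omitted.

```python
import random

def pps_systematic_sample(schools, n_sample):
    """Systematic PPS selection (hypothetical sketch).

    schools: list of (school_id, n_target_grade_students) pairs.
    Larger schools span a wider slice of the cumulative size scale,
    so they are hit by a selection point with higher probability.
    """
    total = sum(size for _, size in schools)
    interval = total / n_sample          # sampling interval
    point = random.uniform(0, interval)  # random start in [0, interval)
    selected, cumulative = [], 0.0
    for school_id, size in schools:
        cumulative += size
        # Select this school for every selection point falling in its range.
        while point <= cumulative and len(selected) < n_sample:
            selected.append(school_id)
            point += interval
    return selected
```

Note that a school larger than the sampling interval can be hit by more than one selection point; operational designs handle such "certainty" schools separately.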


Second stage 

  • A systematic simple random sampling approach was used to select (a) students enrolled in the target grade within participating schools and (b) teachers teaching the target grade within participating schools.
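The second stage can likewise be sketched as an equal-probability systematic draw from an ordered within-school list of students (or teachers). Again a hypothetical helper, assuming a simple frame with no exclusions or replacements:

```python
import random

def systematic_sample(frame, n):
    """Equal-probability systematic sample (hypothetical sketch).

    Fix the frame order, compute the interval k = N/n, draw a random
    start in [0, k), and take the unit at each successive step.
    Assumes len(frame) >= n.
    """
    k = len(frame) / n
    start = random.uniform(0, k)
    return [frame[int(start + i * k)] for i in range(n)]
```

For example, selecting the minimum of 20 students from a school roster of 123 target-grade students steps through the list at interval 6.15.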
Sample size

Per country

  • School sample: 150–200 schools (minimum sample = 150)
  • Student sample: 20 per school (minimum)
  • Teacher sample: 15 per school (minimum)

Across all participating countries

  • Schools, principals, and ICT coordinators: around 3,300
  • Teachers: around 35,000
  • Students: around 60,000
Data collection techniques and instruments

Student CIL assessment

  • Students completed a computer-based test of CIL.
  • The test consisted of questions and tasks presented in four 30-minute modules.
  • Each student completed two modules randomly allocated from the set of four so that the total assessment time for each student was one hour.
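The rotation above can be illustrated by drawing two distinct modules per student from the set of four. Module names here are placeholders, and the operational ICILS design balanced module pairs and their order rather than drawing freely, so this is only a sketch of the idea:

```python
import random

CIL_MODULES = ["M1", "M2", "M3", "M4"]  # four 30-minute test modules (names hypothetical)

def allocate_modules(rng):
    """Randomly allocate two distinct modules to one student,
    giving a one-hour assessment in total (2 x 30 minutes)."""
    return rng.sample(CIL_MODULES, 2)
```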


Student questionnaire

  • After completing the two test modules, students answered (again, on computer) a 30-minute international student questionnaire.
  • It included questions relating to:
    • Students’ background characteristics;
    • Their experience and use of computers and ICT to complete a range of different tasks in school and out of school;
    • Their attitudes toward using computers and ICT.


Questionnaires for principals, teachers, and ICT coordinators

The following three instruments could be completed on computer (over the Internet) or on paper:

  • A 30-minute teacher questionnaire that asked teachers
    • Several basic background questions;
    • Questions relating to their
      • Use of ICT in teaching;
      • Attitudes toward the use of ICT in teaching;
      • Participation in professional learning activities relating to the pedagogical use of ICT.
  • A 10-minute ICT coordinator questionnaire that
    • Asked ICT coordinators about the resources available in the school to support the use of ICT in teaching and learning.
    • Addressed both technological support (e.g., infrastructure, hardware, and software) and pedagogical support (e.g., through professional learning).
  • A 10-minute principal questionnaire that asked school principals to provide information about:
    • School characteristics;
    • School approaches to providing CIL-related teaching and incorporating ICT in teaching and learning.


National contexts survey

  • The national contexts survey was completed online.
  • Completing it was the responsibility of staff in the national centers.
  • The survey gathered general information about key antecedents and processes relating to CIL education at the country level.

Languages used

  • Administration of assessment instruments and questionnaires in 22 different languages
  • The most common languages
    • English (3 countries)
    • Spanish (2 countries)
    • German (2 countries)
    • French (2 countries)
Translation procedures
  • Development of an international version of all assessment instruments in English by the ICILS International Study Center
  • Translation into applicable languages of instruction by participating entities
  • Translation verification by linguistic and assessment experts in order to ensure equivalence with the international version
Quality control of operations

Measures during data collection

  • Participating countries and entities were responsible for data collection within their own territories.
  • Standardized survey operation procedures: step-by-step documentation of all operational activities provided with the operation manuals.
  • Full-scale field test of all instruments and operational procedures (in each participating country and entity)
  • Provision of software tools for supporting activities (e.g., sampling and tracking classes and students, administering school and teacher questionnaires, documenting scoring reliability, creating and checking data files)
  • Training, e.g., for national research coordinators (NRCs) and their staff, for school coordinators, and test administrators
  • School visits conducted by international quality control monitors (IQCMs) during test administration (at 15 schools per grade and country)
  • National quality control program
  • Survey activities questionnaire (SAQ) completed by NRCs


Measures during data processing and cleaning

  • Testing of all data cleaning programs with simulated data sets
  • Material Receipt Database
  • National Adaptation Database
  • Standardized cleaning process
  • Repetition of data cleaning and comparison of the new data sets with the preceding version
  • Identification and correction of irregularities in data patterns