ICILS 2018 OUTCOME MEASURES
ICILS 2018
- Computer and information literacy
- Computational thinking
ICILS Teacher Panel 2020
No (student) assessment
Scale Creation
Computer and information literacy scale
The ICILS CIL reporting scale was established for ICILS 2013, with a mean of 500 (the average CIL scale score across countries in 2013) and a standard deviation of 100 for the equally weighted national samples.
In 2018, three of the five test modules were the same as those used in ICILS 2013, and two modules were new for ICILS 2018.
Prior to scaling of the 2018 data, an extensive analysis of scaling properties was carried out that included reviews of missing values, test coverage, assessment of item fit, differential item functioning by gender, and cross-national measurement equivalence.
The ICILS test items were scaled using item response modeling with the (one-parameter) Rasch model.
- The CIL scale was derived from student responses to the 81 test questions and large tasks (which corresponded to a total of 102 score points).
- Most questions and tasks corresponded to a single item each; however, each ICILS large task was scored against a set of criteria (each criterion with its own unique set of scores) relating to the properties of the task. Each large-task assessment criterion was, therefore, also an item in ICILS.
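The item response functions involved can be sketched as follows: the dichotomous Rasch model for single-score items, and the Rasch partial credit model for the polytomous large-task criteria. This is a simplified pure-Python illustration only, not the operational scaling software; the function names are my own.

```python
import math

def rasch_prob(theta, b):
    """Dichotomous Rasch model: probability of a correct response
    for a student with ability theta on an item with difficulty b."""
    return 1.0 / (1.0 + math.exp(-(theta - b)))

def pcm_probs(theta, deltas):
    """Rasch partial credit model: probabilities of scoring in each of the
    len(deltas) + 1 categories of a polytomous item (e.g., a large-task
    criterion), given step parameters deltas."""
    # Cumulative sums of (theta - delta_k); category 0 has log-numerator 0.
    log_nums = [0.0]
    for d in deltas:
        log_nums.append(log_nums[-1] + (theta - d))
    exps = [math.exp(v) for v in log_nums]
    total = sum(exps)
    return [e / total for e in exps]
```

A student whose ability equals an item's difficulty has a 0.5 probability of a correct response; the partial credit probabilities always sum to one across categories.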
Item parameters for CIL were obtained from a joint data file that included response data from both ICILS 2013 and ICILS 2018. This joint calibration methodology is also applied in IEA TIMSS, PIRLS, and ICCS studies.
Additionally, plausible values were generated as ability estimates, using full conditioning to take account of all student-level and between-school differences.
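The idea behind plausible values can be sketched as follows: rather than assigning each student a single point estimate, several random draws are taken from the student's posterior ability distribution, which combines the likelihood of the observed responses with a prior from the conditioning model. This is a deliberately simplified grid-approximation sketch (assuming a dichotomous Rasch likelihood and a normal conditioning prior with known mu and sigma), not the operational procedure.

```python
import math
import random

def likelihood(theta, responses, difficulties):
    """Rasch likelihood of an observed response pattern (1 = correct)."""
    lik = 1.0
    for x, b in zip(responses, difficulties):
        p = 1.0 / (1.0 + math.exp(-(theta - b)))
        lik *= p if x == 1 else (1.0 - p)
    return lik

def draw_plausible_values(responses, difficulties, mu, sigma, n_pv=5, seed=1):
    """Draw n_pv plausible values from a grid approximation of the posterior.
    mu and sigma are assumed to come from the conditioning (latent regression)
    model fitted on the student's background variables."""
    rng = random.Random(seed)
    grid = [-4.0 + 8.0 * i / 200 for i in range(201)]
    # Posterior is proportional to likelihood times the conditioning prior.
    post = [likelihood(t, responses, difficulties)
            * math.exp(-0.5 * ((t - mu) / sigma) ** 2) for t in grid]
    total = sum(post)
    cdf, acc = [], 0.0
    for p in post:
        acc += p / total
        cdf.append(acc)
    pvs = []
    for _ in range(n_pv):
        u = rng.random()
        for t, c in zip(grid, cdf):
            if c >= u:
                pvs.append(t)
                break
        else:
            pvs.append(grid[-1])
    return pvs
```

In operational analysis, each of the (typically five) plausible values is analyzed separately and the results are combined, so that measurement uncertainty propagates into standard errors.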
Countries were equally weighted within each ICILS cycle for the CIL calibration, and all items were included (except for items that were deleted nationally or internationally following the adjudication process).
Four proficiency levels were established for ICILS 2013 and, following the equating procedure, student achievement in ICILS 2018 could be reported against the same described proficiency levels. The descriptions of the levels were updated to accommodate the new test item material developed for ICILS 2018.
Computational thinking scale
As for the CIL scale, prior to scaling the CT data, extensive analysis of scaling properties was carried out that included reviews of missing values, test coverage, assessment of item fit, differential item functioning by gender, and cross-national measurement equivalence.
The CT test items were scaled using item response modeling with the (one-parameter) Rasch model.
- The CT scale was derived from student responses to the 18 discrete tasks and questions (which corresponded to a total of 39 score points).
- Most questions and tasks corresponded to a single item each; however, the visual coding tasks were scored according to both the correctness of the solution (i.e., the degree to which the executed code completed the specified actions) and the efficiency of the code solution (measured by the number of code blocks used in a solution).
The final CT reporting scale was set to a metric with a mean of 500 (the ICILS average CT score) and a standard deviation of 100 for the equally weighted national samples.
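Setting the metric amounts to a linear transformation of the logit-scale scores so that the equally weighted pooled sample has mean 500 and standard deviation 100. A minimal sketch (illustrative; the operational computation applies the senate-style country weights described above):

```python
def to_reporting_scale(thetas, weights):
    """Linearly transform logit scores so the weighted pooled sample
    has mean 500 and standard deviation 100 on the reporting metric."""
    wsum = sum(weights)
    mean = sum(w * t for w, t in zip(weights, thetas)) / wsum
    var = sum(w * (t - mean) ** 2 for w, t in zip(weights, thetas)) / wsum
    sd = var ** 0.5
    return [500.0 + 100.0 * (t - mean) / sd for t in thetas]
```

Because the transformation is linear, it changes the metric but preserves all relative distances between students on the scale.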
Three proficiency regions were established for CT on the basis of test item locations on the CT achievement scale, allowing each region to be described with reference to example test items.
List of Achievement Scales
The computer and information literacy scale
The computational thinking scale
Scale Creation
Two general types of indices could be distinguished, both derived from the ICILS questionnaires:
Simple indices
- They were constructed through arithmetical transformation or simple recoding.
- For example, the ratio of school computers to students, or an index of immigration background based on information about the country of birth of students and their parents.
Scale indices
- They were derived from the scaling of items, a process typically achieved by using item response modeling of dichotomous or Likert-type items.
- Item response modeling (applying the Rasch partial credit model) provided an adequate tool for deriving 10 international student questionnaire scales, nine teacher questionnaire scales, and seven school questionnaire scales.
- A composite index reflecting socioeconomic background was derived using principal component analysis of three home background indicators, namely, parental occupation, parental education, and home literacy resources.
- Generally, the scales used in ICILS had sound psychometric properties, such as high reliability.
- Confirmatory factor analyses showed satisfactory model fit for the measurement models underpinning the scaling of the questionnaire data.
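The composite socioeconomic index can be sketched as a first-principal-component score computed from the three standardized home background indicators. This is a simplified pure-Python illustration (power iteration on the correlation matrix, assuming complete data); the operational procedure additionally handled missing data and national adaptations.

```python
def first_principal_component(data, iters=200):
    """Scores on the first principal component of standardized indicators,
    found via power iteration on the correlation matrix.
    data: list of rows (respondents), each with k indicator values
    (e.g., parental occupation, parental education, home literacy resources)."""
    n = len(data)
    k = len(data[0])
    # Standardize each indicator column.
    means = [sum(row[j] for row in data) / n for j in range(k)]
    sds = [(sum((row[j] - means[j]) ** 2 for row in data) / n) ** 0.5
           for j in range(k)]
    z = [[(row[j] - means[j]) / sds[j] for j in range(k)] for row in data]
    # Correlation matrix of the standardized indicators.
    corr = [[sum(z[i][a] * z[i][b] for i in range(n)) / n for b in range(k)]
            for a in range(k)]
    # Power iteration converges to the leading eigenvector.
    v = [1.0] * k
    for _ in range(iters):
        w = [sum(corr[a][b] * v[b] for b in range(k)) for a in range(k)]
        norm = sum(x * x for x in w) ** 0.5
        v = [x / norm for x in w]
    # Component scores: projection of each respondent onto the eigenvector.
    return [sum(z[i][j] * v[j] for j in range(k)) for i in range(n)]
```

The resulting component scores are centered at zero by construction and can then be rescaled to any convenient reporting metric.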
Only scale indices are reported below.
List of Background Scales
Student questionnaire
Students’ general engagement with ICT
- Frequency of use of general ICT applications
- Frequency of use of specialist ICT applications
- Frequency of use of ICT for social communication
- Frequency of use of ICT for exchanging information
- Frequency of use of ICT for accessing content from the internet
Student engagement with ICT for school-related purposes
- Frequency of use of ICT for study purposes
- Frequency of use of general applications in class
- Frequency of use of specialist applications in class
Extent of student learning about ICT at school
- Extent to which students learned about CIL tasks at school
- Extent to which students learned about CT tasks at school
Students’ ICT self-efficacy
- ICT self-efficacy regarding the use of general applications
- ICT self-efficacy regarding the use of specialist applications
- Attitudes to ICT futures
Students’ perceptions of ICT
- Perceptions of positive effects of ICT on society
- Perceptions of negative effects of ICT on society
- Perceptions of personal futures with ICT
Teacher questionnaire
Teachers’ ICT self-efficacy
Teachers’ emphasis on developing ICT skills and coding skills
- Teachers’ emphasis on developing ICT capabilities in class
- Teachers’ emphasis on teaching CT-related tasks
Teachers’ use of ICT for class activities
Teachers’ use of ICT for teaching practices
Teachers’ use of ICT tools in class
- Teachers’ use of digital learning tools
- Teachers’ use of general utility software
Teachers’ perceptions of ICT resources and teacher collaboration
- Teachers’ perceptions of the availability of computer resources at school
- Teachers’ perceptions of the collaboration between teachers when using ICT
Teachers’ reports on ICT-related professional learning
- Teacher participation in structured forms of ICT-related professional learning
- Teacher participation in reciprocal forms of ICT-related professional learning
Teachers’ perceptions of positive outcomes of using ICT for teaching and learning
Teachers’ perceptions of negative outcomes of using ICT for teaching and learning
School questionnaires
Principals’ use of ICT
- Principals’ use of ICT for general school-related activities
- Principals’ use of ICT for school-related communication activities
Principals’ views on using ICT
Principals’ reports on expected ICT knowledge and skills of teachers
- Principals’ reports on expectations of ICT use by teachers
- Principals’ reports on expectations for teacher collaboration using ICT
Principals’ reports on priorities for ICT use at schools
- Principals’ views of priorities for facilitating use of ICT - hardware
- Principals’ views of priorities for facilitating use of ICT - support
ICT coordinators’ reports on the availability of digital resources at school
ICT coordinators’ reports on hindrances to the use of ICT for teaching and learning at school
- ICT coordinators’ reports on computer resource hindrances
- ICT coordinators’ reports on pedagogical resource hindrances