This is an Rmd template for protocols and reporting of systematic reviews and meta-analyses. It synthesizes three sources of standards: PRISMA-P, PROSPERO, and MARS.

The template is aimed at

  • guiding the process of planning the systematic review/meta-analysis
  • providing a form for preregistration (enter your text, export it as a standalone HTML file, upload it as the preregistration)

We are aware that MARS targets aspects of reporting after the systematic review/meta-analysis is completed rather than decisions and reasoning in the planning phase, as PRISMA-P and PROSPERO do. MARS nevertheless provides a good framework for determining crucial points of systematic reviews/meta-analyses that should be addressed as early as the planning phase.

Standards have been partially adapted. Click ‘show changes’ to see changes and reasons for change.

| standard | implemented change | reason |
|---|---|---|
| MARS | Left out paper section “Abstract” | An abstract is important for reporting, not, however, for planning and registering. |
| MARS | Left out paper section “Results” and parts of “Discussion” | Specifications on how to report results are important for reporting, not, however, for planning and registering. Prospective information on how results will be computed/synthesized is preserved. |
| MARS | Left out “Protocol: List where the full protocol can be found” | This form practically is the protocol. |
| PROSPERO | Left out non-mandatory fields or integrated them with mandatory fields | Avoiding overly detailed specifications; all relevant information will be integrated. |
| PROSPERO | Left out some options in “Type and method of review” | The options left out are purely health/medicine related. |
| PROSPERO | Left out “Health area of the review” | This field is purely health/medicine related. |




1 General

1.1 Working Title

source description
PRISMA-P Identify the report as a protocol of a systematic review. If the protocol is for an update of a previous systematic review, identify as such
PROSPERO

Give the working title of the review, for example the one used for obtaining funding. Ideally the title should state succinctly the interventions or exposures being reviewed and the associated health or social problems. Where appropriate, the title should use the PI(E)COS structure to contain information on the Participants, Intervention (or Exposure) and Comparison groups, the Outcomes to be measured and Study designs to be included.

For reviews in languages other than English, this field should be used to enter the title in the language of the review. This will be displayed together with the English language title.
MARS Title: State the research question and type of research synthesis (e.g., narrative synthesis, meta-analysis)

Another jingle-jangle fallacy? Examining the validity of Technological Pedagogical and Content Knowledge (TPACK) self-report assessments

1.2 Type of Review

source description
PRISMA-P Not specified.
PROSPERO

Type and method of review: Select the type of review and the review method from the lists below. Select the health area(s) of interest for your review.

  • Meta-analysis
  • Narrative synthesis
  • Network meta-analysis
  • Review of reviews
  • Synthesis of qualitative studies
  • Systematic review
  • Other
MARS Not specified.

Meta-analysis

1.4 Anticipated start and completion date

source description
PRISMA-P Not specified.
PROSPERO Give the date when the systematic review commenced, or is expected to commence. Give the date by which the review is expected to be completed.
MARS Not specified.

10/2020 – 06/2021

1.5 Stage of Review

source description
PRISMA-P Not specified.
PROSPERO

Indicate the stage of progress of the review by ticking the relevant Started and Completed boxes. Additional information may be added in the free text box provided. Please note: Reviews that have progressed beyond the point of completing data extraction at the time of initial registration are not eligible for inclusion in PROSPERO. Should evidence of incorrect status and/or completion date being supplied at the time of submission come to light, the content of the PROSPERO record will be removed leaving only the title and named contact details and a statement that inaccuracies in the stage of the review date had been identified. This field should be updated when any amendments are made to a published record and on completion and publication of the review. If this field was pre-populated from the initial screening questions then you are not able to edit it until the record is published.

  • The review has not yet started: [yes/no]

| Review stage | Started | Completed |
|---|---|---|
| Preliminary searches | Yes/No | Yes/No |
| Piloting of the study selection process | Yes/No | Yes/No |
| Formal screening of search results against eligibility criteria | Yes/No | Yes/No |
| Data extraction | Yes/No | Yes/No |
| Risk of bias (quality) assessment | Yes/No | Yes/No |
| Data analysis | Yes/No | Yes/No |
Provide any other relevant information about the stage of the review here (e.g. Funded proposal, protocol not yet finalised).
MARS Not specified.

| Review stage | Started | Completed |
|---|---|---|
| Preliminary searches | Yes | Yes |
| Piloting of the study selection process | Yes | No |
| Formal screening of search results against eligibility criteria | No | No |
| Data extraction | No | No |
| Risk of bias (quality) assessment | No | No |
| Data analysis | No | No |

1.6 Names, Affiliations, Contact

source description
PRISMA-P
  • Provide name, institutional affiliation, e-mail address of all protocol authors; provide physical mailing address of corresponding author.
  • Describe contributions of protocol authors and identify the guarantor of the review.
PROSPERO
  • Named Contact: The named contact acts as the guarantor for the accuracy of the information presented in the register record.
  • Named contact email: Give the electronic mail address of the named contact.
  • Organisational affiliation of the review: Full title of the organisational affiliations for this review and website address if available. This field may be completed as ‘None’ if the review is not affiliated to any organisation.
  • Review team members and their organisational affiliations: Give the personal details and the organisational affiliations of each member of the review team. Affiliation refers to groups or organisations to which review team members belong.
MARS Not specified.

    Corresponding author:
    Iris Backfisch
    University of Tuebingen
    https://uni-tuebingen.de/de/169665
    Hausserstrasse 43, 72076 Tuebingen

    1.7 Collaborators

    source description
    PRISMA-P Not specified.
    PROSPERO Collaborators (name & affiliation): individuals working on the review who are not review team members.
    MARS Not specified.

    NA

    1.8 Amendments to previous versions

    source description
    PRISMA-P If the protocol represents an amendment of a previously completed or published protocol, identify as such and list changes; otherwise, state plan for documenting important protocol amendments.
    PROSPERO Not specified.
    MARS Not specified.

    NA

    1.9 Funding sources, sponsors and their roles

    source description
    PRISMA-P
    • Indicate sources of financial or other support for the review
    • Provide name for the review funder and/or sponsor
    • Describe roles of funder(s), sponsor(s), and/or institution(s), if any, in developing the protocol
    PROSPERO Funding sources/sponsors: Give details of the individuals, organizations, groups or other legal entities who take responsibility for initiating, managing, sponsoring and/or financing the review. Include any unique identification numbers assigned to the review by the individuals or bodies listed. Grant numbers.
    MARS
    • List all sources of monetary and in-kind funding support
    • State the role of funders in conducting the synthesis and deciding to publish the results, if any

    This project is part of the “Qualitätsoffensive Lehrerbildung”, a joint initiative of the Federal Government and the Länder which aims to improve the quality of teacher training. The programme is funded by the Federal Ministry of Education and Research. The authors are responsible for the content of this publication.

    1.10 Conflict of Interest

    source description
    PRISMA-P Not specified.
    PROSPERO List any conditions that could lead to actual or perceived undue influence on judgements concerning the main topic investigated in the review.
    MARS Describe possible conflicts of interest, including financial and other nonfinancial interests.

    No

    2 Introduction

    2.1 Rationale

    source description
    PRISMA-P Describe the rationale for the review in the context of what is already known.
    PROSPERO Not specified.
    MARS

    Problem: State the question or relation(s) under investigation, including

    • Historical background, including previous syntheses and meta-analyses related to the topic
    • Theoretical, policy, and/or practical issues related to the question or relation(s) of interest
    • Populations and settings to which the question or relation(s) is relevant
    • Rationale for
      1. choice of study designs,
      2. the selection and coding of outcomes,
      3. the selection and coding of potential moderators or mediators of results
    • Psychometric characteristics of outcome measures and other variables

    Recent research has provided evidence that teachers’ professional knowledge regarding the adoption of educational technologies is a central determinant of successful teaching with technologies (Petko, 2012). One prominent conceptualization of teachers’ professional knowledge for teaching with technology is the technological pedagogical content knowledge (TPACK) framework established by Mishra and Koehler (2006). Based on this framework, TPACK encompasses three generic knowledge components (technological knowledge TK, pedagogical knowledge PK, content knowledge CK), three intersections of these knowledge components (technological-pedagogical knowledge TPK, technological-content knowledge TCK, pedagogical-content knowledge PCK) and TPACK as an integrated knowledge component “that represents a class of knowledge that is central to teachers’ work with technology” (Mishra & Koehler, 2006, p. 1028), see Figure 1.


    Figure 1. TPACK Model (Mishra & Koehler, 2006; © 2012 by tpack.org)

    The most prominent instrument to assess TPACK is the self-report questionnaire by Schmidt et al. (2009). This questionnaire encompasses items on the different knowledge components which ask teachers to self-evaluate their confidence to fulfill a task (e.g., an item for TCK is “I know about technologies that I can use for understanding and doing mathematics”, and for TPK is “I can choose technologies that enhance the teaching approaches for a lesson.”). Recently, researchers have claimed that this self-report questionnaire (and the various extensions and adaptations thereof) taps into teachers’ self-efficacy beliefs about teaching with technology rather than into their available knowledge (Scherer et al., 2018; Lachner et al., 2019). Based on the conceptualization of self-efficacy beliefs as the subjective perception of one’s own capability to solve a task (Bandura, 2006), researchers have recently argued that self-report TPACK might be highly intertwined with teachers’ self-efficacy beliefs. Related research suggests that the use of self-report TPACK might complicate the interpretation of the results of empirical studies (Abbitt, 2011; Joo, Park, & Lim, 2018; Fabriz et al., 2020). Therefore, the use of self-report TPACK might induce a jingle-jangle fallacy (Gonzalez et al., 2020; Kelley, 1927). Jingle-jangle fallacies describe a lack of extrinsic convergent validity in two different ways: On the one hand, two measures that carry the same label might represent two conceptually different constructs (jingle fallacy). In the present case, self-report TPACK might differ from teachers’ knowledge for technology-enhanced teaching to a larger extent than previous research suggests. On the other hand, two measures that are labeled differently might measure the same construct (jangle fallacy). Accordingly, self-report TPACK and self-efficacy beliefs towards technology-enhanced teaching might be similar constructs with comparable implications for teachers’ technology integration (see, e.g., Marsh et al., 2020, for an investigation of jingle-jangle fallacies). However, to date, a systematic analysis of the validity of self-reported TPACK is missing.

    Against this background, three complementary approaches will be applied in this paper to examine the validity of self-reported TPACK.

    First, we will meta-analytically examine how the different knowledge components of the TPACK model (i.e., TK, CK, PK, TCK, TPK, PCK) are related to each other across studies when assessed with self-report TPACK questionnaires. Measures of TPACK components that are more proximal to each other, or that are intersections of each other in the model, should show higher correlations than more distal measures or measures that do not intersect (e.g., TK should correlate more highly with TCK than with PCK).

    Second, potential jingle fallacies of self-report TPACK and teachers’ knowledge for technology-enhanced teaching will be examined, that is, whether the two measures represent the same concept, as proposed by researchers (e.g., Schmidt et al., 2009). To this end, the extent to which self-reported TPACK and more objective measures of teachers’ knowledge for technology-enhanced teaching are related to each other will be examined. If, as proposed, self-report TPACK represents teachers’ knowledge for technology-enhanced teaching, it should be highly related to the quality of technology use for teaching (see, e.g., Kunter et al., 2013; Ericsson, 2006, for the importance of teacher knowledge for generic teaching quality). To investigate the relationship between self-reported TPACK and performance-based measures of teachers’ knowledge for technology-enhanced teaching, empirical studies that examine both types of measures will be reviewed, such as studies that investigate the role of self-reported TPACK for the quality of technology-enhanced lesson planning (e.g., Backfisch et al., 2020; Kopcha et al., 2014) or test-based approaches (e.g., Akyuz, 2018; Krauskopf & Forssell, 2013; So & Kim, 2009).

    Third, potential jangle fallacies of self-report TPACK and self-efficacy beliefs towards technology-enhanced teaching will be examined. To this end, the magnitude of the correlation between self-reported TPACK and self-efficacy beliefs towards technology-enhanced teaching will be examined. Furthermore, the extent to which both measures are related to teachers’ technology integration (e.g., frequency of technology integration) will be analyzed. If self-report TPACK and self-efficacy beliefs are related to outcome variables to the same magnitude, both measures might represent conceptually similar constructs.

    References
    Abbitt, J. T. (2011). Measuring technological pedagogical content knowledge in preservice teacher education: A review of current methods and instruments. Journal of Research on Technology in Education, 43(4), 281–300. https://doi.org/10.1080/15391523.2011.10782573
    Akyuz, D. (2018). Measuring technological pedagogical content knowledge (TPACK) through performance assessment. Computers & Education, 125, 212–225. https://doi.org/10.1016/j.compedu.2018.06.012
    Backfisch, I., Lachner, A., Hische, C., Loose, F., & Scheiter, K. (2020). Professional knowledge or motivation? Investigating the role of teachers’ expertise on the quality of technology-enhanced lesson plans. Learning & Instruction, 66, 101300. https://doi.org/10.1016/j.learninstruc.2019.101300
    Bandura, A. (2006). Guide for constructing self-efficacy scales. In Self-efficacy beliefs of adolescents (pp. 307–337). https://doi.org/10.1017/CBO9781107415324.004
    Ericsson, K. A. (2006). The influence of experience and deliberate practice on the development of superior expert performance. The Cambridge Handbook of Expertise and Expert Performance, 38, 685-705.
    Fabriz, S., Hansen, M., Heckmann, C., Mordel, J., Mendzheritskaya, J., Stehle, S., Schulze-Vorberg, L., Ulrich, I., & Horz, H. (2020). How a professional development programme for university teachers impacts their teaching-related self-efficacy, self-concept, and subjective knowledge. Higher Education Research & Development, 1–15.
    Gonzalez, O., MacKinnon, D. P., & Muniz, F. B. (2020). Extrinsic Convergent Validity Evidence to Prevent Jingle and Jangle Fallacies. Multivariate Behavioral Research, 1–17.
    Joo, Y. J., Park, S., & Lim, E. (2018). Factors influencing preservice teachers’ intention to use technology: TPACK, teacher self-efficacy, and Technology Acceptance Model. Educational Technology and Society, 21(3), 48–59.
    Kelley, T. L. (1927). Interpretation of educational measurements.
    Kopcha, T. J., Ottenbreit-Leftwich, A., Jung, J., & Baser, D. (2014). Examining the TPACK framework through the convergent and discriminant validity of two measures. Computers & Education, 78, 87–96. https://doi.org/10.1016/j.compedu.2014.05.003
    Krauskopf, K., & Forssell, K. (2013). I have TPCK! – What does that mean? Examining the External Validity of TPCK Self-Reports. Proceedings of Society for Information Technology & Teacher Education International Conference 2013, 2190–2197. http://www.stanford.edu/~forssell/papers/SITE2013_TPCK_validity.pdf
    Kunter, M., Klusmann, U., Baumert, J., Richter, D., Voss, T., & Hachfeld, A. (2013). Professional competence of teachers: Effects on instructional quality and student development. Journal of Educational Psychology, 105, 805–820. https://doi.org/10.1037/a0032583
    Lachner, A., Backfisch, I., & Stürmer, K. (2019). A test-based approach of Modeling and Measuring Technological Pedagogical Knowledge. Computers & Education, 103645. https://doi.org/10.1016/j.compedu.2019.103645
    Marsh, H. W., Pekrun, R., Parker, P. D., Murayama, K., Guo, J., Dicke, T., & Arens, A. K. (2019). The murky distinction between self-concept and self-efficacy: Beware of lurking jingle-jangle fallacies. Journal of Educational Psychology, 111(2), 331.
    Mishra, P., & Koehler, M. J. (2006). Technological pedagogical content knowledge: A framework for integrating technology in teacher knowledge. Teachers College Record, 108(6), 1017-1054.
    Petko, D. (2012). Teachers’ pedagogical beliefs and their use of digital media in classrooms: Sharpening the focus of the “will, skill, tool” model and integrating teachers’ constructivist orientations. Computers & Education, 58, 1351–1359. https://doi.org/10.1016/j.compedu.2011.12.013
    Scherer, R., Tondeur, J., & Siddiq, F. (2017). On the quest for validity: Testing the factor structure and measurement invariance of the technology-dimensions in the Technological, Pedagogical, and Content Knowledge (TPACK) model. Computers & Education, 112, 1-17. https://doi.org/10.1016/j.compedu.2017.04.012
    Schmidt, D. A., Baran, E., Thompson, A. D., Mishra, P., Koehler, M. J., & Shin, T. S. (2009). Technological pedagogical content knowledge (TPACK) the development and validation of an assessment instrument for preservice teachers. Journal of Research on Technology in Education, 42(2), 123-149. https://doi.org/10.1080/15391523.2009.10782544
    So, H. J., & Kim, B. (2009). Learning about problem based learning: Student teachers integrating technology, pedagogy and content knowledge. Australasian Journal of Educational Technology, 25(1), 101–116. https://doi.org/10.14742/ajet.1183

    2.2 Research Questions

    source description
    PRISMA-P Provide an explicit statement of the question(s) the review will address with reference to participants, interventions, comparators, and outcomes (PICO)
    PROSPERO State the question(s) to be addressed by the review, clearly and precisely. Review questions may be specific or broad. It may be appropriate to break very broad questions down into a series of related more specific questions. Questions may be framed or refined using PI(E)COS where relevant.
    MARS

    Objectives: State the hypotheses examined, indicating which were prespecified, including

    • Question in terms of relevant participant characteristics (including animal populations), independent variables (experimental manipulations, treatments, or interventions), ruling out of possible confounding variables, dependent variables (outcomes, criterion), and other features of study designs
    • Method(s) of synthesis and if meta-analysis was used, the specific methods used to integrate studies (e.g., effect-size metric, averaging method, the model used in homogeneity analysis)

    • RQ1: How are the different knowledge components of the TPACK model related to each other when examined with self-report TPACK questionnaires?
    • RQ2: Does the use of self-reported TPACK questionnaires constitute a jingle fallacy with teachers’ knowledge for technology-enhanced teaching?
      • RQ2a: To what extent is self-reported TPACK related to performance-based measures of knowledge for technology-enhanced teaching?
    • RQ3: Does the use of self-reported TPACK questionnaires constitute a jangle fallacy with self-efficacy beliefs?
      • RQ3a: To what extent is self-reported TPACK related to self-efficacy beliefs?
      • RQ3b: To what extent are self-reported TPACK and self-efficacy beliefs differently related to self-reported technology integration?

    3 Methods

    3.1 Eligibility: Inclusion and Exclusion Criteria

    source description
    PRISMA-P Specify the study characteristics (such as PICO, study design, setting, time frame) and report characteristics (such as years considered, language, publication status) to be used as criteria for eligibility for the review
    PROSPERO Give details of the types of study (study designs) eligible for inclusion in the review. If there are no restrictions on the types of study design eligible for inclusion, or certain study types are excluded, this should be stated. The preferred format includes details of both inclusion and exclusion criteria.
    MARS

    Describe the criteria for selecting studies, including

    • Independent variables (e.g., experimental manipulations, types of treatments or interventions or predictor variables)

    • Dependent variable (e.g., outcomes, in syntheses of clinical research including both potential benefits and potential adverse effects)

    • Eligible study designs (e.g., methods of sampling or treatment assignment)

    • Handling of multiple reports about the same study or sample, describing which are primary and handling of multiple measures using the same participants

    • Restrictions on study inclusion (e.g., by study age, language, location, or report type)

    • Changes to the prespecified inclusion and exclusion criteria, and when these changes were made

    • Handling of reports that did not contain sufficient information to judge eligibility (e.g., lacking information about study design) and reports that did not include sufficient information for analysis (e.g., did not report numerical data about those outcomes)

    Added by authors

    Alternative approaches (to PICO) to describe study characteristics:

  • SPIDER: relevant when including qualitative research (https://doi.org/10.1177/1049732312452938)

  • PICOS: compared to PICO, includes study design and reaches higher specificity (ISBN: 978-1-900640-47-3; https://www.york.ac.uk/media/crd/Systematic_Reviews.pdf)

  • UTOS (ISBN: 978-0875895253)

    Inclusion criteria:

    • Empirical questionnaire studies that investigate TPACK with a self-report questionnaire such as Schmidt et al. (2009) or extensions thereof (e.g., Koh et al., 2014; Scherer et al., 2017)
    • Empirical studies that investigate TPACK of teachers or educators across all pedagogical fields (either pre-service, trainee or in-service teachers, principals or teacher educators in all educational fields and school levels)
    • Papers which are written in English
    • Papers that provide the necessary correlations for the analyses, information that can be used to compute the correlations, or whose authors provide the information on request
    • Peer-reviewed journal articles, conference proceedings, book chapters and dissertations
    • Papers that are accessible as full-text documents
    • Papers that provide one or more of the following information:
      • Quantitative self-report TPACK (at least one of the components of the TPACK model)
      • Correlations / factorial structure of the different components of TPACK
      • Self-efficacy measure(s) that tap into a component of teachers’ self-efficacy of technology-enhanced teaching (e.g., technological self-efficacy, general teaching-related self-efficacy)
      • Relation of self-report TPACK and self-efficacy measure(s)
      • Outcome measure(s) of teachers’ technology integration (quantity or quality measures; self-report or performance-based measures (e.g., test-based, lesson plans, observer ratings))

    Exclusion criteria:

    • Participants are pupils
    • Case studies

    3.2 Sources of Search: List and Rationale

    source description
    PRISMA-P Not specified.
    PROSPERO Searches: State the sources that will be searched. Give the search dates, and any restrictions (e.g. language or publication period). Do NOT enter the full search strategy (it may be provided as a link or attachment.)
    MARS

    Describe all information sources:

    • Databases searched (e.g., PsycINFO, ClinicalTrials.gov), including dates of coverage (i.e., earliest and latest records included in the search), and software and search platforms used
    • Names of specific journals that were searched and the volumes checked
    • Explanation of rationale for choosing reference lists if examined (e.g., other relevant articles, previous research syntheses)
    • Documents for which forward (citation) searches were conducted, stating why these documents were chosen
    • Number of researchers contacted if study authors or individual researchers were contacted to find studies or to obtain more information about included studies, as well as criteria for making contact (e.g., previous relevant publications), and response rate
    • Dates of contact if other direct contact searches were conducted such as contacting corporate sponsors or mailings to distribution lists
    • Search strategies in addition to those above and the results of these searches
    • Adequate databases: Web of Science, ERIC, ScienceDirect, IEEE Xplore Digital Library, LearnTechLib, PsycINFO, ProQuest Dissertations, Google Scholar (first 100 results)
    • Papers from previous reviews on TPACK

    3.3 Search Strategy

    source description
    PRISMA-P Present draft of search strategy to be used for at least one electronic database, including planned limits, such that it could be repeated.
    PROSPERO URL to search strategy: Give a link to a published pdf/word document detailing either the search strategy or an example of a search strategy for a specific database if available (including the keywords that will be used in the search strategies), or upload your search strategy. Do NOT provide links to your search results. Alternatively, upload your search strategy to CRD in pdf format. Please note that by doing so you are consenting to the file being made publicly accessible.
    MARS Describe all information sources: Search strategies of electronic searches, such that they could be repeated (e.g., include the search terms used, Boolean connectors, fields searched, explosion of terms).

    Search String/Query: (TPACK OR TPCK OR “technological pedagogical content knowledge” OR “technological-pedagogical-content-knowledge”) AND NOT (virus OR health OR chromatography OR cell)

    3.4 Data Management Tools Used

    source description
    PRISMA-P Describe the mechanism(s) that will be used to manage records and data throughout the review.
    PROSPERO Not specified.
    MARS Not specified.

    Rayyan (https://rayyan.qcri.org/welcome) will be used by several raters to include or exclude and label papers

    3.5 Selection of Studies

    source description
    PRISMA-P State the process that will be used for selecting studies (such as two independent reviewers) through each phase of the review (that is, screening, eligibility and inclusion in meta-analysis).
    PROSPERO Data extraction (selection and coding): Describe how studies will be selected for inclusion. State what data will be extracted or obtained. State how this will be done and recorded.
    MARS

    Describe the process for deciding which studies would be included in the syntheses and/or included in the meta-analysis, including

    • Document elements (e.g., title, abstract, full text) used to make decisions about inclusion or exclusion from the synthesis at each step of the screening process
    • Qualifications (e.g., training, educational or professional status) of those who conducted each step in the study selection process, stating whether each step was conducted by a single person or in duplicate as well as an explanation of how reliability was assessed if one screener was used and how disagreements were resolved if multiple were used.

    Two independent raters will conduct a two-step inclusion and exclusion procedure based on our inclusion and exclusion criteria: 1) screening all titles and abstracts, 2) screening full texts. As a further safeguard, a backward search will be conducted in which review articles and book chapters published in 2020 dealing with TPACK will be screened and searched for appropriate literature that is not yet included in our meta-analysis.

    Besides directly including and excluding papers, raters can choose the “maybe” category. All papers on which the raters disagree about inclusion or exclusion, and especially papers in the “maybe” category, will be discussed among the raters based on the inclusion criteria until agreement is reached.

    3.6 Method of Extracting Data & Information (from Reports)

    source description
    PRISMA-P Describe planned method of extracting data from reports (such as piloting forms, done independently, in duplicate), any processes for obtaining and confirming data from investigators.
    PROSPERO Not specified.
    MARS

    Describe methods of extracting data from reports, including

    • Variables for which data were sought and the variable categories
    • Qualifications of those who conducted each step in the data extraction process, stating whether each step was conducted by a single person or in duplicate and an explanation of how reliability was assessed if one screener was used and how disagreements were resolved if multiple screeners were used as well as whether data coding forms, instructions for completion, and the data (including metadata) are available, stating where they can be found (e.g., public registry, supplemental materials)

    • Studies will be coded in Rayyan by two independent raters using the duplicate-detection, inclusion/exclusion/maybe, and labeling functions.
    • To extract detailed information from the included studies, a standardized Excel sheet or a self-programmed dashboard that produces a relational database will be set up and applied by two independent raters.
    • All extracted data and information will be analyzed for inter-rater agreement, and discrepancies will be discussed (see the sketch below).
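
To quantify inter-rater agreement for the screening and coding decisions, Cohen's kappa could be computed. The following is a minimal sketch, assuming the decisions are exported (e.g., from Rayyan) as a CSV file with one row per paper and one column per rater; the file name and column names are hypothetical.

```r
# Minimal sketch: inter-rater agreement for screening/coding decisions.
# Assumes a CSV export with one row per paper and hypothetical columns
# "rater1" and "rater2" holding the decisions ("include", "exclude", "maybe").
library(irr)

decisions <- read.csv("screening_decisions.csv")  # hypothetical file name

# Cohen's kappa for two raters with nominal categories
kappa2(decisions[, c("rater1", "rater2")], weight = "unweighted")

# Raw percentage agreement as a complementary descriptive statistic
mean(decisions$rater1 == decisions$rater2)
```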

    3.7 List and Description of Data and Information Extracted

    source description
    PRISMA-P
    • List and define all variables for which data will be sought (such as PICO items, funding sources), any pre-planned data assumptions and simplifications
    • List and define all outcomes for which data will be sought, including prioritization of main and additional outcomes, with rationale
    PROSPERO
    • Condition or domain being studied: Give a short description of the disease, condition or healthcare domain being studied. This could include health and wellbeing outcomes.
  • Participants/population: Give summary criteria for the participants or populations being studied by the review. The preferred format includes details of both inclusion and exclusion criteria.
  • Intervention(s), exposure(s): Give full and clear descriptions or definitions of the nature of the interventions or the exposures to be reviewed.
  • Comparator(s)/control: Where relevant, give details of the alternatives against which the main subject/topic of the review will be compared (e.g. another intervention or a non-exposed control group). The preferred format includes details of both inclusion and exclusion criteria.
  • Main and additional outcome(s): Give the pre-specified main (most important) outcomes of the review, including details of how the outcome is defined and measured and when these measurement are made, if these are part of the review inclusion criteria.
  • Measures of effect: Please specify the effect measure(s) for your main outcome(s), e.g. relative risks, odds ratios, risk difference, and/or ‘number needed to treat’.

    MARS Not specified.
    • Study characteristics:
      • Publication status
      • Publication year
      • Context of using TPACK (e.g., psychometric study, study to explore the relations to other variables [e.g., in a SEM], studies aimed at identifying profiles, …)
    • Sample characteristics:
      • Sample size
      • Profession: teacher, teacher educators…
      • Teaching experience: pre-service teachers, in-service teachers…
      • Age
      • Country
      • Gender (% female)
      • School level: elementary / primary / secondary / university teachers…
      • Subject domains
    • Type of study:
      • Experimental setting / survey
      • Intervention
      • Cross-sectional / longitudinal
    • Measures applied:
      • TPACK self-report (e.g., Schmidt et al., 2009; Scherer et al., 2017)
        • TPACK dimensions covered
        • Reliability (coefficient type, value)
        • Validity evidence/additional variables
      • Type of self-efficacy measures (e.g., general teaching related self-efficacy; technology-related self-efficacy)
        • Reliability (coefficient type, value)
        • Validity evidence/additional variables
      • Type of performance-based measures of TPACK (e.g., lesson plans, test-based)
        • Reliability (coefficient type, value)
        • Validity evidence/additional variables
      • Type of outcome measures (e.g., frequency of technology use general / during teaching; qualitative measure)
        • Reliability (coefficient type, value)
        • Validity evidence/additional variables
    • Quantitative reports: Correlations and effect sizes
      • Self-report TPACK & self-efficacy measure
      • Self-report TPACK & outcome measure
      • Self-efficacy measure & outcome measure
      • Self-report TPACK & performance-based measure of TPACK
      • In each case: type of correlation (e.g., Pearson), type of correlation extracted (e.g., directly from the paper, computed based on information in the paper, based on raw data if provided by the authors); see the coding-table sketch below
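
As an illustration of how the variables listed above could be organized, the following sketch defines an empty study-level coding table in R. All column names are hypothetical placeholders rather than the final codebook, and the actual extraction may instead use the standardized Excel sheet or dashboard mentioned in Section 3.6.

```r
# Sketch of a study-level coding table for the variables listed above.
# Column names are illustrative placeholders, not the final codebook.
library(tibble)

coding_sheet <- tibble(
  study_id           = integer(),
  publication_year   = integer(),
  publication_status = character(),  # journal article, proceedings, dissertation, ...
  n                  = integer(),
  profession         = character(),  # teacher, teacher educator, principal, ...
  experience         = character(),  # pre-service, trainee, in-service
  country            = character(),
  percent_female     = double(),
  school_level       = character(),
  subject_domain     = character(),
  design             = character(),  # experimental, survey, cross-sectional, longitudinal
  tpack_instrument   = character(),  # e.g., Schmidt et al. (2009) or adaptation
  tpack_components   = character(),
  reliability_type   = character(),
  reliability_value  = double()
)

# Correlations/effect sizes would be stored in a separate, linked table
# (one row per extracted correlation) to preserve the relational structure.
```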

    3.8 Effect Size Transformation from Individual Studies

    source description
    PRISMA-P Not specified.
    PROSPERO Not specified.
    MARS Describe the statistical methods for calculating effect sizes, including the metric(s) used (e.g., correlation coefficients, differences in means, risk ratios) and formula(s) used to calculate effect sizes.

    To investigate the magnitude of the correlations between the different measures, Pearson correlation coefficients (r) will be extracted or, where necessary, converted from the information provided in the included papers (based on the approach of Polanin & Snilstveit, 2016). To investigate differences between the synthesized correlations, we will apply Cohen’s q (see the sketch below).
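
A minimal sketch of this step, assuming the metafor package is used; the numeric values are purely illustrative.

```r
# Sketch: converting extracted Pearson correlations to Fisher's z and
# computing Cohen's q for the difference between two correlations.
# Assumes the metafor package; the values below are purely illustrative.
library(metafor)

dat <- data.frame(ri = c(0.45, 0.60), ni = c(120, 200))
dat <- escalc(measure = "ZCOR", ri = ri, ni = ni, data = dat)  # yi = Fisher's z, vi = 1/(n - 3)

# Cohen's q: difference between two Fisher-z-transformed correlations
cohens_q <- function(r1, r2) atanh(r1) - atanh(r2)
cohens_q(0.60, 0.45)
```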

    3.9 Risk of Bias in Individual Studies

    source description
    PRISMA-P Describe anticipated methods for assessing risk of bias of individual studies, including whether this will be done at the outcome or study level, or both; state how this information will be used in data synthesis
    PROSPERO Risk of bias (quality) assessment: Describe the method of assessing risk of bias or quality assessment. State which characteristics of the studies will be assessed and any formal risk of bias tools that will be used.
    MARS

    Describe any methods used to assess risk to internal validity in individual study results, including

    • Risks assessed and criteria for concluding risk exists or does not exist
    • Methods for including risk to internal validity in the decisions to synthesize of the data and the interpretation of results
    Added by authors Describe how the quality of the original studies is rated, e.g. by ‘The Study Design and Implementation Assessment Device (Study DIAD)’: https://doi.org/10.1037/1082-989X.13.2.130

    We will apply general criteria of study quality (e.g., sample sizes, reliability of the applied measures; see Valentine & Cooper, 2008, for a comprehensive overview of potential criteria).

    4 Results

    4.1 Strategy for Data Synthesis

    source description
    PRISMA-P
    • Describe criteria under which study data will be quantitatively synthesised.

    • If data are appropriate for quantitative synthesis, describe planned summary measures, methods of handling data and methods of combining data from studies, including any planned exploration of consistency (such as I², Kendall’s τ)

    • If quantitative synthesis is not appropriate, describe the type of summary planned

    PROSPERO

    Strategy for data synthesis: Provide details of the planned synthesis including a rationale for the methods selected. This must not be generic text but should be specific to your review and describe how the proposed analysis will be applied to your data.

    MARS

    Describe narrative and statistical methods used to compare studies. If meta-analysis was conducted, describe the methods used to combine effects across studies and the model used to estimate the heterogeneity of the effect sizes (e.g., a fixed-effect or random-effects model, robust variance estimation), including

  • Rationale for the method of synthesis

  • Methods for weighting study results

  • Methods to estimate imprecision (e.g., confidence or credibility intervals) both within and between studies

  • Description of all transformations or corrections (e.g., to account for small samples or unequal group numbers) and adjustments (e.g., for clustering, missing data, measurement artifacts, or construct-level relationships) made to the data and justification for these

  • Additional analyses (e.g., subgroup analyses, meta-regression), including whether each analysis was prespecified or post hoc

  • Selection of prior distributions and assessment of model fit if Bayesian analyses were conducted

  • Name and version number of computer programs used for the analysis

  • Statistical code and where it can be found (e.g., a supplement)


    Assuming that we identify sufficient empirical studies, we will apply meta-analytical inferential statistics to examine the research questions. Following suggestions by Gonzalez et al. (2020), we will investigate the magnitude of the correlations between the different measures, depending on the number of identified studies, either with Williams’ equation or with meta-analytical structural equation modelling (MASEM) approaches. Multiple correlations per study will not be aggregated to the study level; instead, to address dependencies among these effect sizes, robust variance estimation and multilevel meta-analyses will be considered (see the sketch below).

    RQ1: Investigate the magnitude of the correlations of the different TPACK components

    RQ2a: Investigate the magnitude of the correlation of self-report TPACK and the quality of technology-enhanced teaching.

    RQ3a: Investigate the magnitude of the correlation of self-report TPACK and self-efficacy

    RQ3b: Compare the magnitude of the correlation of self-report TPACK and outcome measures (e.g., frequency of technology integration) to the magnitude of the correlation of self-efficacy measures and outcome measures
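
The following sketch illustrates how the multilevel and robust-variance-estimation models mentioned above could be fitted, assuming the metafor package and effect sizes stored as Fisher-z-transformed correlations; the data object and column names are illustrative.

```r
# Sketch of the planned synthesis models (assumes metafor; column names illustrative).
# dat: one row per extracted correlation, with yi/vi from escalc(measure = "ZCOR"),
# study_id identifying the study and es_id identifying the effect size.
library(metafor)

res <- rma.mv(yi, vi,
              random = ~ 1 | study_id / es_id,  # effect sizes nested within studies
              data = dat)

# Cluster-robust (RVE) inference to guard against misspecified dependencies
robust(res, cluster = dat$study_id)

# Back-transform the pooled Fisher z to r for reporting
transf.ztor(coef(res))
```

If the MASEM route is taken instead, pooled correlation matrices could be analyzed with a dedicated package such as metaSEM; this is an option, not a fixed part of the plan.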

    4.2 Moderators/ Subgroups

    source description
    PRISMA-P Describe any proposed additional analyses (such as sensitivity or subgroup analyses, meta-regression)
    PROSPERO Analysis of subgroups or subsets: State any planned investigation of ‘subgroups’. Be clear and specific about which type of study or participant will be included in each group or covariate investigated. State the planned analytic approach.
    MARS Not specified.

    Different moderators will be investigated to gain more insight into influential factors (see the sketch below):
    • Different adaptations of self-report TPACK measures
    • Different types/foci of self-efficacy measures (e.g., general teaching self-efficacy vs. technology-related self-efficacy)
    • Different types of outcome measures of technology integration (e.g., self-report, lesson plans)
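
A minimal sketch of how these moderators could enter a mixed-effects meta-regression, assuming the metafor package; the moderator column names are illustrative and would correspond to the coding table.

```r
# Sketch: moderator analysis as a mixed-effects meta-regression
# (assumes metafor; moderator column names are illustrative).
library(metafor)

res_mod <- rma.mv(yi, vi,
                  mods   = ~ tpack_instrument + se_measure_type + outcome_type,
                  random = ~ 1 | study_id / es_id,
                  data   = dat)
summary(res_mod)
```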

    4.3 Assessment of Publication Bias

    source description
    PRISMA-P Specify any planned assessment of meta-bias(es) (such as publication bias across studies, selective reporting within studies)
    PROSPERO Not specified.
    MARS

    Describe risk of bias across studies, including

    • Statement about whether
      1. unpublished studies and unreported data, or
      2. only published data were included in the synthesis and the rationale if only published data were used
    • Assessments of the impact of publication bias (e.g., modeling of data censoring, trim-and-fill analysis)
    • Results of any statistical analyses looking for selective reporting of results within studies

    File-drawer analyses: p-curves for the correlations

    Publication bias of single correlations:
    • Moderator effects of publication status (published vs. grey literature)
    • Fail-safe N analyses
    • Trim-and-fill analyses
    • Asymmetry tests of funnel plots
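
A minimal sketch of the checks listed above, assuming the metafor package. Because trim-and-fill and the asymmetry test operate on univariate models, a study-level model fitted with rma() is assumed for these diagnostics; the data object is illustrative, and the p-curve analysis is not shown here.

```r
# Sketch of the publication-bias checks listed above (assumes metafor).
# trimfill() and regtest() require a univariate model, so a study-level
# random-effects model is fitted first; dat_study is illustrative
# (one Fisher-z effect size per study: yi, vi, and publication status).
library(metafor)

res_uni <- rma(yi, vi, data = dat_study)

fsn(yi, vi, data = dat_study, type = "Rosenthal")   # fail-safe N
trimfill(res_uni)                                    # trim-and-fill
funnel(res_uni)                                      # funnel plot
regtest(res_uni)                                     # Egger-type asymmetry test

# Moderator effect of publication status (published vs. grey literature)
rma(yi, vi, mods = ~ pub_status, data = dat_study)
```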

    5 Discussion

    5.1 Strength of Cumulative Evidence

    source description
    PRISMA-P Describe how the strength of the body of evidence will be assessed (such as GRADE).
    PROSPERO Not specified.
    MARS Describe the generalizability (external validity) of conclusions, including
    • Implications for related populations, intervention variations, and dependent (outcome) variables

    We will examine the strength of our evidence by computing Cohen’s q. Additionally, we will evaluate the overall strength of evidence based on the GRADE framework (Grading of Recommendations Assessment, Development and Evaluation) and examine the following dimensions: overall risk of bias (based on publication bias and the quality of the included studies), inconsistency (whether findings are consistent across studies), and indirectness (whether the participants of the included studies are part of the target population).