Another jingle-jangle fallacy?
Examining the validity of Technological Pedagogical and Content Knowledge (TPACK) self-report assessments
This is an Rmd template for protocols and reporting of systematic reviews and meta-analyses. It synthesizes three sources of standards: PRISMA-P, PROSPERO, and MARS.
The template is aimed at researchers planning and registering systematic reviews and meta-analyses.
We are aware that MARS targets aspects of reporting after the systematic review/meta-analysis is completed, whereas PRISMA-P and PROSPERO address decisions and reasoning in the planning phase. MARS nevertheless provides a good framework for determining crucial points of systematic reviews/meta-analyses that should be addressed as early as the planning phase.
Standards have been partially adapted. Click ‘show changes’ to see changes and reasons for change.
standard | implemented change | reason |
---|---|---|
MARS | Left out paper section “Abstract” | An abstract is important for reporting, not however, for planning and registering. |
MARS | Left out paper section “Results” and parts of “Discussion” | Specifications on how to report results is important for reporting, not however, for planning and registering. Prospective information on how results will be computed/ synthesized is preserved. |
MARS | Left out “Protocol: List where the full protocol can be found” | This form practically is the protocol. |
PROSPERO | Left out non-mandatory fields or integrated them with mandatory fields. | Avoids overly detailed specifications; all relevant information will be integrated. |
PROSPERO | Left out some options in “Type and method of review” | The omitted options are purely health-/medicine-related. |
PROSPERO | Left out “Health area of the review” | This field is purely health-/medicine-related. |
source | description |
---|---|
PRISMA-P | Identify the report as a protocol of a systematic review. If the protocol is for an update of a previous systematic review, identify as such |
PROSPERO | Give the working title of the review, for example the one used for obtaining funding. Ideally the title should state succinctly the interventions or exposures being reviewed and the associated health or social problems. Where appropriate, the title should use the PI(E)COS structure to contain information on the Participants, Intervention (or Exposure) and Comparison groups, the Outcomes to be measured and Study designs to be included. For reviews in languages other than English, this field should be used to enter the title in the language of the review. This will be displayed together with the English language title. |
MARS | Title: State the research question and type of research synthesis (e.g., narrative synthesis, meta-analysis) |
Another jingle-jangle fallacy? Examining the validity of Technological Pedagogical and Content Knowledge (TPACK) self-report assessments
source | description |
---|---|
PRISMA-P | Not specified. |
PROSPERO | Type and method of review: Select the type of review and the review method from the lists below. Select the health area(s) of interest for your review. |
MARS | Not specified. |
Meta-analysis
source | description |
---|---|
PRISMA-P | If registered, provide the name of the registry (such as PROSPERO) and registration number. |
PROSPERO | Not specified. |
MARS | Give the place where the synthesis is registered and its registry number, if registered |
NA
source | description |
---|---|
PRISMA-P | Not specified. |
PROSPERO | Give the date when the systematic review commenced, or is expected to commence. Give the date by which the review is expected to be completed. |
MARS | Not specified. |
10/2020 – 06/2021
source | description |
---|---|
PRISMA-P | Not specified. |
PROSPERO | Indicate the stage of progress of the review by ticking the relevant Started and Completed boxes. Additional information may be added in the free text box provided. Please note: Reviews that have progressed beyond the point of completing data extraction at the time of initial registration are not eligible for inclusion in PROSPERO. Should evidence of incorrect status and/or completion date being supplied at the time of submission come to light, the content of the PROSPERO record will be removed leaving only the title and named contact details and a statement that inaccuracies in the stage of the review date had been identified. This field should be updated when any amendments are made to a published record and on completion and publication of the review. If this field was pre-populated from the initial screening questions then you are not able to edit it until the record is published. |
MARS | Not specified. |
Review stage | Started | Completed |
---|---|---|
Preliminary searches | Yes | Yes |
Piloting of the study selection process | Yes | No |
Formal screening of search results against eligibility criteria | No | No |
Data extraction | No | No |
Risk of bias (quality) assessment | No | No |
Data analysis | No | No |
source | description |
---|---|
PRISMA-P | |
PROSPERO | |
MARS | Not specified. |
Corresponding author:
Iris Backfisch
University of Tuebingen
https://uni-tuebingen.de/de/169665
Hausserstrasse 43, 72076 Tuebingen
source | description |
---|---|
PRISMA-P | Not specified. |
PROSPERO | Collaborators: name and affiliation of individuals working on the review who are not review team members. |
MARS | Not specified. |
NA
source | description |
---|---|
PRISMA-P | If the protocol represents an amendment of a previously completed or published protocol, identify as such and list changes; otherwise, state plan for documenting important protocol amendments. |
PROSPERO | Not specified. |
MARS | Not specified. |
NA
source | description |
---|---|
PRISMA-P | |
PROSPERO | Funding sources/sponsors: Give details of the individuals, organizations, groups or other legal entities who take responsibility for initiating, managing, sponsoring and/or financing the review. Include any unique identification numbers assigned to the review by the individuals or bodies listed. Grant numbers. |
MARS | |
This project is part of the “Qualitätsoffensive Lehrerbildung”, a joint initiative of the Federal Government and the Länder which aims to improve the quality of teacher training. The programme is funded by the Federal Ministry of Education and Research. The authors are responsible for the content of this publication.
source | description |
---|---|
PRISMA-P | Not specified. |
PROSPERO | List any conditions that could lead to actual or perceived undue influence on judgements concerning the main topic investigated in the review. |
MARS | Describe possible conflicts of interest, including financial and other nonfinancial interests. |
None.
source | description |
---|---|
PRISMA-P | Describe the rationale for the review in the context of what is already known. |
PROSPERO | Not specified. |
MARS | Problem: State the question or relation(s) under investigation, including |
Recent research has provided evidence that teachers’ professional knowledge regarding the adoption of educational technologies is a central determinant of successful teaching with technologies (Petko, 2012). One prominent conceptualization of teachers’ professional knowledge for teaching with technology is the technological pedagogical content knowledge (TPACK) framework established by Mishra and Koehler (2006). Based on this framework, TPACK encompasses three generic knowledge components (technological knowledge, TK; pedagogical knowledge, PK; content knowledge, CK), three intersections of these knowledge components (technological-pedagogical knowledge, TPK; technological-content knowledge, TCK; pedagogical-content knowledge, PCK), and TPACK as an integrated knowledge component “that represents a class of knowledge that is central to teachers’ work with technology” (Mishra & Koehler, 2006, p. 1028); see Figure 1.
Figure 1. TPACK Model (Mishra & Koehler, 2006; © 2012 by tpack.org)
The most prominent questionnaire to assess TPACK is the self-report questionnaire by Schmidt et al. (2009). This questionnaire encompasses items on the different knowledge components that ask teachers to self-evaluate their confidence to fulfill a task (e.g., an item for TCK is “I know about technologies that I can use for understanding and doing mathematics”, and for TPK is “I can choose technologies that enhance the teaching approaches for a lesson.”). Recently, researchers have claimed that this self-report questionnaire (and its various extensions and adaptations) taps into teachers’ self-efficacy beliefs about teaching with technology rather than their available knowledge (Scherer et al., 2017; Lachner et al., 2019). Based on the conceptualization of self-efficacy beliefs as the subjective perception of one’s own capability to solve a task (Bandura, 2006), researchers have argued that self-report TPACK might be highly intertwined with teachers’ self-efficacy beliefs. Related research suggests that the use of self-report TPACK might complicate the interpretation of the results of empirical studies (Abbitt, 2011; Joo, Park, & Lim, 2018; Fabriz et al., 2020). Therefore, the use of self-report TPACK might induce a jingle-jangle fallacy (Gonzalez et al., 2020; Kelley, 1927). Jingle-jangle fallacies describe a lack of extrinsic convergent validity in two different ways: On the one hand, two measures that carry the same label might represent two conceptually different constructs (jingle fallacy). In the present case, self-report TPACK might differ from teachers’ knowledge for technology-enhanced teaching to a larger extent than previous research suggests. On the other hand, two measures that carry different labels might examine the same construct (jangle fallacy). Accordingly, self-report TPACK and self-efficacy beliefs towards technology-enhanced teaching might be similar constructs with comparable implications for teachers’ technology integration (see, e.g., Marsh et al., 2019, for an investigation of jingle-jangle fallacies). However, to date, a systematic analysis of the validity of self-reported TPACK is missing.
Against this background, three complementary approaches will be applied in this paper to examine the validity of self-reported TPACK.
First, we will meta-analytically examine how the different knowledge components of the TPACK model (i.e., TK, CK, PK, TCK, TPK, PCK) are related to each other across studies when assessed with self-report TPACK questionnaires. Measures that depict TPACK components that are more proximal to each other, or that are intersections of each other in the model, should show higher correlations than more distal measures or measures that do not intersect (e.g., TK should correlate more highly with TCK than with PCK).
Second, potential jingle fallacies of self-report TPACK and teachers’ knowledge for technology-enhanced teaching will be examined; that is, we will investigate whether the two measures represent the same concept, as proposed by researchers (e.g., Schmidt et al., 2009). To this end, the extent to which self-reported TPACK and more objective measures of teachers’ knowledge for technology-enhanced teaching are related to each other will be examined. If, as proposed, self-report TPACK represents teachers’ knowledge for technology-enhanced teaching, it should be highly related to the quality of technology use for teaching (see, e.g., Kunter et al., 2013; Ericsson, 2006, for the importance of teacher knowledge for generic teaching quality). To investigate the relationship between self-reported TPACK and performance-based measures of teachers’ knowledge for technology-enhanced teaching, we will review empirical studies that examine these measures, such as studies investigating the role of self-reported TPACK for the quality of technology-enhanced lesson planning (e.g., Backfisch et al., 2020; Kopcha et al., 2014) or test-based approaches (e.g., Akyuz, 2018; Krauskopf & Forssell, 2013; So & Kim, 2009).
Third, potential jangle fallacies of self-report TPACK and self-efficacy beliefs towards technology-enhanced teaching will be examined. To this end, the magnitude of the correlations between self-reported TPACK and self-efficacy beliefs towards technology-enhanced teaching will be compared. Furthermore, the extent to which both measures are related to teachers’ technology integration (e.g., frequency of technology integration) will be analyzed. If self-report TPACK and self-efficacy beliefs are related to outcome variables to the same magnitude, both measures might represent conceptually similar constructs.
References
Abbitt, J. T. (2011). Measuring technological pedagogical content knowledge in preservice teacher education: A review of current methods and instruments. Journal of Research on Technology in Education, 43(4), 281–300. https://doi.org/10.1080/15391523.2011.10782573
Akyuz, D. (2018). Measuring technological pedagogical content knowledge (TPACK) through performance assessment. Computers & Education, 125, 212–225. https://doi.org/10.1016/j.compedu.2018.06.012
Backfisch, I., Lachner, A., Hische, C., Loose, F., & Scheiter, K. (2020). Professional knowledge or motivation? Investigating the role of teachers’ expertise on the quality of technology-enhanced lesson plans. Learning and Instruction, 66, 101300. https://doi.org/10.1016/j.learninstruc.2019.101300
Bandura, A. (2006). Guide for constructing self-efficacy scales. In Self-efficacy beliefs of adolescents (pp. 307–337). https://doi.org/10.1017/CBO9781107415324.004
Ericsson, K. A. (2006). The influence of experience and deliberate practice on the development of superior expert performance. The Cambridge Handbook of Expertise and Expert Performance, 38, 685-705.
Fabriz, S., Hansen, M., Heckmann, C., Mordel, J., Mendzheritskaya, J., Stehle, S., Schulze-Vorberg, L., Ulrich, I., & Horz, H. (2020). How a professional development programme for university teachers impacts their teaching-related self-efficacy, self-concept, and subjective knowledge. Higher Education Research & Development, 1–15.
Gonzalez, O., MacKinnon, D. P., & Muniz, F. B. (2020). Extrinsic Convergent Validity Evidence to Prevent Jingle and Jangle Fallacies. Multivariate Behavioral Research, 1–17.
Joo, Y. J., Park, S., & Lim, E. (2018). Factors influencing preservice teachers’ intention to use technology: TPACK, teacher self-efficacy, and Technology Acceptance Model. Educational Technology and Society, 21(3), 48–59.
Kelley, T. L. (1927). Interpretation of educational measurements.
Kopcha, T. J., Ottenbreit-Leftwich, A., Jung, J., & Baser, D. (2014). Examining the TPACK framework through the convergent and discriminant validity of two measures. Computers & Education, 78, 87–96. https://doi.org/10.1016/j.compedu.2014.05.003
Krauskopf, K., & Forssell, K. (2013). I have TPCK! – What does that mean? Examining the External Validity of TPCK Self-Reports. Proceedings of Society for Information Technology & Teacher Education International Conference 2013, 2190–2197. http://www.stanford.edu/~forssell/papers/SITE2013_TPCK_validity.pdf
Kunter, M., Klusmann, U., Baumert, J., Richter, D., Voss, T., & Hachfeld, A. (2013). Professional competence of teachers: Effects on instructional quality and student development. Journal of Educational Psychology, 105, 805–820. https://doi.org/10.1037/a0032583
Lachner, A., Backfisch, I., & Stürmer, K. (2019). A test-based approach of modeling and measuring technological pedagogical knowledge. Computers & Education, 142, 103645. https://doi.org/10.1016/j.compedu.2019.103645
Marsh, H. W., Pekrun, R., Parker, P. D., Murayama, K., Guo, J., Dicke, T., & Arens, A. K. (2019). The murky distinction between self-concept and self-efficacy: Beware of lurking jingle-jangle fallacies. Journal of Educational Psychology, 111(2), 331–353.
Mishra, P., & Koehler, M. J. (2006). Technological pedagogical content knowledge: A framework for teacher knowledge. Teachers College Record, 108(6), 1017–1054.
Petko, D. (2012). Teachers’ pedagogical beliefs and their use of digital media in classrooms: Sharpening the focus of the “will, skill, tool” model and integrating teachers’ constructivist orientations. Computers & Education, 58, 1351–1359. https://doi.org/10.1016/j.compedu.2011.12.013
Scherer, R., Tondeur, J., & Siddiq, F. (2017). On the quest for validity: Testing the factor structure and measurement invariance of the technology-dimensions in the Technological, Pedagogical, and Content Knowledge (TPACK) model. Computers & Education, 112, 1-17. https://doi.org/10.1016/j.compedu.2017.04.012
Schmidt, D. A., Baran, E., Thompson, A. D., Mishra, P., Koehler, M. J., & Shin, T. S. (2009). Technological pedagogical content knowledge (TPACK): The development and validation of an assessment instrument for preservice teachers. Journal of Research on Technology in Education, 42(2), 123–149. https://doi.org/10.1080/15391523.2009.10782544
So, H. J., & Kim, B. (2009). Learning about problem based learning: Student teachers integrating technology, pedagogy and content knowledge. Australasian Journal of Educational Technology, 25(1), 101–116. https://doi.org/10.14742/ajet.1183
source | description |
---|---|
PRISMA-P | Provide an explicit statement of the question(s) the review will address with reference to participants, interventions, comparators, and outcomes (PICO) |
PROSPERO | State the question(s) to be addressed by the review, clearly and precisely. Review questions may be specific or broad. It may be appropriate to break very broad questions down into a series of related more specific questions. Questions may be framed or refined using PI(E)COS where relevant. |
MARS | Objectives: State the hypotheses examined, indicating which were prespecified, including |
source | description |
---|---|
PRISMA-P | Specify the study characteristics (such as PICO, study design, setting, time frame) and report characteristics (such as years considered, language, publication status) to be used as criteria for eligibility for the review |
PROSPERO | Give details of the types of study (study designs) eligible for inclusion in the review. If there are no restrictions on the types of study design eligible for inclusion, or certain study types are excluded, this should be stated. The preferred format includes details of both inclusion and exclusion criteria. |
MARS | Describe the criteria for selecting studies, including |
Added by authors | Alternative approaches (to PICO) to describe study characteristics: SPIDER, relevant when including qualitative research (https://doi.org/10.1177/1049732312452938); PICOS, which compared to PICO includes the study design and reaches higher specificity (ISBN 978-1-900640-47-3; https://www.york.ac.uk/media/crd/Systematic_Reviews.pdf); UTOS (ISBN 978-0875895253). |
Inclusion criteria:
Exclusion criteria:
source | description |
---|---|
PRISMA-P | Not specified. |
PROSPERO | Searches: State the sources that will be searched. Give the search dates, and any restrictions (e.g. language or publication period). Do NOT enter the full search strategy (it may be provided as a link or attachment.) |
MARS | Describe all information sources: |
source | description |
---|---|
PRISMA-P | Present draft of search strategy to be used for at least one electronic database, including planned limits, such that it could be repeated. |
PROSPERO | URL to search strategy: Give a link to a published pdf/word document detailing either the search strategy or an example of a search strategy for a specific database if available (including the keywords that will be used in the search strategies), or upload your search strategy. Do NOT provide links to your search results. Alternatively, upload your search strategy to CRD in pdf format. Please note that by doing so you are consenting to the file being made publicly accessible. |
MARS | Describe all information sources: Search strategies of electronic searches, such that they could be repeated (e.g., include the search terms used, Boolean connectors, fields searched, explosion of terms). |
Search String/Query: (TPACK OR TPCK OR “technological pedagogical content knowledge” OR “technological-pedagogical-content-knowledge”) AND NOT (virus OR health OR chromatography OR cell)
source | description |
---|---|
PRISMA-P | Describe the mechanism(s) that will be used to manage records and data throughout the review. |
PROSPERO | Not specified. |
MARS | Not specified. |
rayyan (https://rayyan.qcri.org/welcome) will be used by several raters to include, exclude, and label papers.
source | description |
---|---|
PRISMA-P | State the process that will be used for selecting studies (such as two independent reviewers) through each phase of the review (that is, screening, eligibility and inclusion in meta-analysis). |
PROSPERO | Data extraction (selection and coding): Describe how studies will be selected for inclusion. State what data will be extracted or obtained. State how this will be done and recorded. |
MARS | Describe the process for deciding which studies would be included in the syntheses and/or included in the meta-analysis, including |
Two independent raters will conduct a two-step inclusion and exclusion procedure based on our inclusion and exclusion criteria: (1) screening all titles and abstracts, and (2) screening full texts. As a further safeguard, a backward search will be conducted in which review articles and book chapters published in 2020 dealing with TPACK will be screened for appropriate literature that is not yet included in our meta-analysis.
Besides directly including or excluding papers, raters can choose a “maybe” category. All papers on which the raters disagree, and especially the papers in the “maybe” category, will be discussed among the raters on the basis of the inclusion criteria until agreement is reached.
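To make this screening step transparent, inter-rater agreement could be quantified before the discussion round. The following is a minimal R sketch, not prescribed by this protocol; the file and column names are illustrative assumptions, not rayyan’s actual export format:

```r
# Minimal sketch: quantify agreement between two raters' screening decisions.
# Assumes a hypothetical file "screening_decisions.csv" with one row per paper
# and columns "rater1" and "rater2" holding "include", "exclude", or "maybe".
library(irr)

decisions <- read.csv("screening_decisions.csv", stringsAsFactors = FALSE)

# Cohen's kappa for two raters (unweighted; the categories are nominal)
kappa2(decisions[, c("rater1", "rater2")])

# Papers to be resolved by discussion: any disagreement or any "maybe" vote
to_discuss <- subset(decisions,
                     rater1 != rater2 | rater1 == "maybe" | rater2 == "maybe")
nrow(to_discuss)
```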
source | description |
---|---|
PRISMA-P | Describe planned method of extracting data from reports (such as piloting forms, done independently, in duplicate), any processes for obtaining and confirming data from investigators. |
PROSPERO | Not specified. |
MARS | Describe methods of extracting data from reports, including |
source | description |
---|---|
PRISMA-P | |
PROSPERO | |
MARS | Not specified. |
source | description |
---|---|
PRISMA-P | Not specified. |
PROSPERO | Not specified. |
MARS | Describe the statistical methods for calculating effect sizes, including the metric(s) used (e.g., correlation coefficients, differences in means, risk ratios) and formula(s) used to calculate effect sizes. |
To investigate the magnitude of the correlations between the different measures, Pearson correlation coefficients (r) will be extracted or, where necessary, converted from the information provided in the included papers (based on the approach of Polanin & Snilstveit, 2016). To investigate differences between the synthesized correlations, we will apply Cohen’s q.
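As an illustration of this metric, a minimal R sketch follows; the use of the metafor package is our assumption here, as the protocol does not fix the software at this point. Correlations are Fisher-z transformed for pooling, and Cohen’s q is the difference between two Fisher-transformed correlations.

```r
# Minimal sketch: Fisher's z transformation of extracted correlations and
# Cohen's q. The data frame and its values are toy examples.
library(metafor)

dat <- data.frame(ri = c(0.45, 0.52, 0.38),   # extracted Pearson correlations
                  ni = c(120, 85, 210))       # corresponding sample sizes

# yi = Fisher's z, vi = sampling variance 1 / (n - 3)
dat <- escalc(measure = "ZCOR", ri = ri, ni = ni, data = dat)

# Cohen's q: difference between two Fisher-transformed correlations
cohens_q <- function(r1, r2) atanh(r1) - atanh(r2)
cohens_q(0.45, 0.52)
```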
source | description |
---|---|
PRISMA-P | Describe anticipated methods for assessing risk of bias of individual studies, including whether this will be done at the outcome or study level, or both; state how this information will be used in data synthesis |
PROSPERO | Risk of bias (quality) assessment: Describe the method of assessing risk of bias or quality assessment. State which characteristics of the studies will be assessed and any formal risk of bias tools that will be used. |
MARS | Describe any methods used to assess risk to internal validity in individual study results, including |
Added by authors | Describe how the quality of the original studies is rated, e.g., by ‘The Study Design and Implementation Assessment Device (Study DIAD)’: https://doi.org/10.1037/1082-989X.13.2.130 |
We will apply general criteria of study quality (e.g., sample sizes, reliability of the applied measures; see Valentine & Cooper, 2008, for a comprehensive overview of potential criteria).
source | description |
---|---|
PRISMA-P | |
PROSPERO | Strategy for data synthesis: Provide details of the planned synthesis including a rationale for the methods selected. This must not be generic text but should be specific to your review and describe how the proposed analysis will be applied to your data. |
MARS | Describe narrative and statistical methods used to compare studies. If meta-analysis was conducted, describe the methods used to combine effects across studies and the model used to estimate the heterogeneity of the effect sizes (e.g., a fixed-effect or random-effects model, robust variance estimation), including: rationale for the method of synthesis; methods for weighting study results; methods to estimate imprecision (e.g., confidence or credibility intervals) both within and between studies; description of all transformations or corrections (e.g., to account for small samples or unequal group numbers) and adjustments (e.g., for clustering, missing data, measurement artifacts, or construct-level relationships) made to the data and justification for these; additional analyses (e.g., subgroup analyses, meta-regression), including whether each analysis was prespecified or post hoc; selection of prior distributions and assessment of model fit if Bayesian analyses were conducted; name and version number of computer programs used for the analysis; statistical code and where it can be found (e.g., a supplement). |
Assuming that we identify sufficient empirical studies, we will apply meta-analytic inferential statistics to examine the research questions. Following suggestions by Gonzalez et al. (2020), we will investigate the magnitude of the correlations between the different measures, depending on the number of identified studies, either with Williams’s equation or with meta-analytic structural equation modeling (MASEM) approaches. Multiple correlations per study will not be aggregated to the study level; instead, to address dependencies among these effect sizes, robust variance estimation and multilevel meta-analyses will be considered (see the illustrative sketch after the research questions below).
RQ1: Investigate the magnitude of the correlations of the different TPACK components.
RQ2a: Investigate the magnitude of the correlation of self-report TPACK and the quality of technology-enhanced teaching.
RQ3a: Investigate the magnitude of the correlation of self-report TPACK and self-efficacy.
RQ3b: Compare the magnitude of the correlation of self-report TPACK and outcome measures (e.g., frequency of technology integration) to the magnitude of the correlation of self-efficacy measures and outcome measures.
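To illustrate how the dependency handling described above could be implemented, here is a minimal R sketch; the choice of the metafor and clubSandwich packages is our assumption, not fixed by this protocol:

```r
# Minimal sketch: three-level meta-analysis of Fisher-z correlations with
# cluster-robust variance estimation. Assumes a hypothetical data frame "dat"
# with columns yi (Fisher's z), vi (sampling variance), study (study ID), and
# es_id (effect-size ID within study).
library(metafor)
library(clubSandwich)

# Multilevel model: effect sizes nested within studies
fit <- rma.mv(yi, vi, random = ~ 1 | study / es_id, data = dat)

# CR2 cluster-robust standard errors to guard against misspecified
# dependence among effect sizes from the same study
coef_test(fit, vcov = "CR2")
```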
source | description |
---|---|
PRISMA-P | Describe any proposed additional analyses (such as sensitivity or subgroup analyses, meta-regression) |
PROSPERO | Analysis of subgroups or subsets: State any planned investigation of ‘subgroups’. Be clear and specific about which type of study or participant will be included in each group or covariate investigated. State the planned analytic approach. |
MARS | Not specified. |
source | description |
---|---|
PRISMA-P | Specify any planned assessment of meta-bias(es) (such as publication bias across studies, selective reporting within studies) |
PROSPERO | Not specified. |
MARS | Describe risk of bias across studies, including |
File-drawer analyses: p-curves for the correlations
Publication bias of single correlations:
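A minimal R sketch of how publication-bias diagnostics for single correlations could look, again assuming metafor; the specific diagnostics shown (funnel plot, Egger-type regression test, trim-and-fill) are illustrative choices, not fixed by this protocol:

```r
# Minimal sketch: publication-bias diagnostics for one pooled correlation.
# Assumes "dat" holds Fisher-z effect sizes (yi) and sampling variances (vi).
library(metafor)

fit <- rma(yi, vi, data = dat)  # random-effects model

funnel(fit)    # funnel plot of effect sizes against standard errors
regtest(fit)   # Egger-type regression test for funnel-plot asymmetry
trimfill(fit)  # trim-and-fill adjustment for potentially missing studies
```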
source | description |
---|---|
PRISMA-P | Describe how the strength of the body of evidence will be assessed (such as GRADE). |
PROSPERO | Not specified. |
MARS | Describe the generalizability (external validity) of conclusions, including implications for related populations, intervention variations, and dependent (outcome) variables. |
We will examine the strength of our evidence by computing Cohen’s q. Additionally, we will evaluate the overall strength of evidence based on the GRADE framework (Grading of Recommendations Assessment, Development and Evaluation), examining the following dimensions: overall risk of bias (based on publication bias and the quality of the included studies), inconsistency (whether findings are consistent across studies), and indirectness (whether the participants of the studies are part of the target population).