Assessing Media Literacy among Students Enrolled in Basic Writing and First-Year Composition
Systematic media literacy education at the college level is largely nonexistent in the U.S. Because assessment is necessary for the development of curriculum and standards, it is crucial that researchers develop instruments to measure improvement resulting from media literacy education. The Critical Evaluation and Analysis of Media (CEAM) scale was designed for that purpose: it measures college students’ self-reported practice of critically evaluating and analyzing visual media messages online for credibility, audience, and technical design elements.
The goals of this study are: (a) to chart the development of the CEAM scale and examine its factor structure, (b) to examine the potential of the CEAM scale to serve as a generalizable instrument that meets the needs of the research community, and (c) to gather baseline data about the self-reported critical viewing practices of students enrolled in basic writing and first-year composition. Each goal is addressed in a separate chapter of the dissertation.
With the exception of the first study, which required two data sets (one for an exploratory and one for a confirmatory factor analysis), all studies use the same data set, gathered in Fall 2015. In Fall 2014 and Fall 2015, a purposive sample was drawn from students enrolled in the first-year composition sequence at a large public institution in central Texas designated as a Hispanic-Serving Institution. In Fall 2014, a total of 323 first-semester students completed the scale; in Fall 2015, a total of 322 first-semester students did so.
The study in Chapter II employs a factor analytic framework for identifying dimensions within the construct of media literacy. Using principal axis factoring with an oblique (Promax) rotation, the exploratory factor analysis revealed a three-factor structure. The first factor accounted for 26.234% of the total variance in the data set, the second for 4.069%, and the third for 3.574%. Items were retained for each of the three factors if the standardized factor loading was greater than .32. After examining reliability and judging the content of each item with a loading below .32, five items were removed, leaving 27 items. Overall reliability for the revised 27-item scale is high (α = .91); reliability for Factor 1 is good (α = .87), and reliability for Factor 2 (α = .79) and Factor 3 (α = .74) is acceptable.
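The reliability figures above are Cronbach's alpha values, which can be computed directly from an item-level response matrix. A minimal sketch in Python, using a tiny illustrative matrix rather than the CEAM data:

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, n_items) response matrix."""
    k = items.shape[1]                         # number of items
    item_vars = items.var(axis=0, ddof=1)      # sample variance of each item
    total_var = items.sum(axis=1).var(ddof=1)  # variance of respondents' total scores
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Illustrative data: three perfectly consistent items yield alpha = 1.0
responses = np.array([[1, 1, 1],
                      [2, 2, 2],
                      [3, 3, 3]], dtype=float)
print(round(cronbach_alpha(responses), 2))  # -> 1.0
```

In practice the same function would be applied to the full 27-item matrix and to each factor's subset of columns to obtain the per-factor alphas.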
Using principal axis factoring with an oblique (Promax) rotation, the confirmatory factor analysis also revealed a three-factor structure. The three factors were named (a) questioning credibility, (b) recognizing audience, and (c) recognizing design. The first factor accounts for 31.401% of the total variance in the data set, the second for 5.926%, and the third for 5.130%. The standardized factor loadings for most items were above .32. Overall reliability for the 27-item scale is high (α = .91); reliability for Factor 1 (α = .80) and Factor 3 (α = .81) is good, and reliability for Factor 2 (α = .78) is acceptable. Overall, the underlying structure of the instrument suggests that there are measurable skills for critically analyzing and evaluating visual media messages. These skills cut across types of media messages (news, entertainment, and advertisement), which suggests that students can apply the same set of skills to critically analyze and evaluate different types of media messages.
Building on the results of the study in Chapter II, the study in Chapter III uses item response theory (IRT) analysis to examine the generalizability of the CEAM scale. A unidimensional IRT model was fit to item-level data. The analysis revealed high IRT-based score reliability for the 27-item scale (α = .93), and all standardized factor loadings were .42 or above. Examination of expected a posteriori (EAP) values revealed that, as expected, a student with a lower perceived media literacy level scores lower, and a student with a higher perceived media literacy level scores higher. All items on the CEAM scale exhibit moderate or higher discrimination parameter values. One trend in the discrimination parameters is that items about advertising tend to have the highest capacity for differentiating between students of higher and lower perceived media literacy levels, whereas items about the credibility of news stories differentiate only moderately. A second trend is that items asking why media messages appeal to different audiences tend to have a high capacity for differentiation. Item information functions (IIFs) and item characteristic curves (ICCs) also support the discrimination parameter values and EAP values. These findings support the use of the instrument as a generalizable, sample-free instrument, and the analysis also yielded information about trends in how students may engage with different types of media, or with different media literacy practices, at different levels of confidence.
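The role of the discrimination parameter can be illustrated with the two-parameter logistic (2PL) item response function, where a steeper slope means an item separates lower- and higher-ability respondents more sharply. The sketch below uses made-up parameter values, not the CEAM estimates:

```python
import math

def icc_2pl(theta: float, a: float, b: float) -> float:
    """2PL item characteristic curve: probability of endorsing an item
    given latent trait level theta, discrimination a, and difficulty b."""
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

# A high-discrimination item (a = 2.0) changes probability sharply around its
# difficulty; a low-discrimination item (a = 0.5) changes slowly. Both values
# are hypothetical.
for a in (2.0, 0.5):
    p_low, p_high = icc_2pl(-1.0, a, 0.0), icc_2pl(1.0, a, 0.0)
    print(f"a={a}: P(theta=-1)={p_low:.2f}, P(theta=+1)={p_high:.2f}")
```

Items described above as differentiating strongly (e.g., advertising items) would correspond to larger a values; those differentiating only moderately (e.g., news credibility items) to smaller ones.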
Chapter IV describes the results of an independent samples t-test comparing responses on the CEAM scale for students enrolled in basic writing and first-year composition. There was a statistically significant difference in the scores of students enrolled in basic writing (M = 3.12, SD = 0.68) and students enrolled in first-year composition (M = 3.37, SD = 0.62) for the total average on the scale, t(320) = -1.998, p = .047. There was not a statistically significant difference in the scores of students enrolled in basic writing (M = 3.05, SD = 0.79) and students enrolled in first-year composition (M = 3.13, SD = 0.78) for Factor 2 (Recognizing Audience), t(320) = -0.86, p = .388. However, there was a statistically significant difference in the scores of students enrolled in basic writing (M = 3.18, SD = 0.76) and students enrolled in first-year composition (M = 3.35, SD = 0.71) for Factor 1 (Questioning Credibility), t(320) = -2.03, p = .044, and in the scores of students enrolled in basic writing (M = 3.04, SD = 0.72) and students enrolled in first-year composition (M = 3.21, SD = 0.71) for Factor 3 (Recognizing Design), t(320) = -2.04, p = .042. The effect size for each significant result, however, was small. These results have implications for future research about media literacy in the composition sequence, and for research about the digital and media divides that exist between students from different backgrounds.
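The comparisons in this chapter rest on the Student's independent-samples t statistic and an effect size (the small effect sizes reported are consistent with Cohen's d). Both can be sketched in a few lines of Python, using made-up group data rather than the study's responses:

```python
import numpy as np

def independent_t_and_d(x: np.ndarray, y: np.ndarray):
    """Student's t statistic (equal variances assumed) and Cohen's d."""
    nx, ny = len(x), len(y)
    # Pooled standard deviation across both groups
    pooled_sd = np.sqrt(((nx - 1) * x.var(ddof=1) + (ny - 1) * y.var(ddof=1))
                        / (nx + ny - 2))
    diff = x.mean() - y.mean()
    t = diff / (pooled_sd * np.sqrt(1 / nx + 1 / ny))
    d = diff / pooled_sd  # effect size in pooled-SD units
    return t, d

# Hypothetical groups standing in for basic writing vs. first-year composition
basic = np.array([1.0, 2.0, 3.0])
fyc = np.array([2.0, 3.0, 4.0])
t, d = independent_t_and_d(basic, fyc)
print(round(t, 3), round(d, 3))  # -> -1.225 -1.0
```

With the study's group sizes (n = 322 total), the degrees of freedom for the pooled test would be nx + ny - 2 = 320, matching the t(320) values reported above.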