
Advances in Social Sciences Research Journal – Vol. 9, No. 9

Publication Date: September 25, 2022

DOI: 10.14738/assrj.99.13066

Citation: Denton, C. A., Muis, K. R., Dube, A., & Armstrong, S. (2022). En-Garde: Source Evaluations in the Digital Age. Advances in Social Sciences Research Journal, 9(9), 320-360.


En-Garde: Source Evaluations in the Digital Age

Courtney A. Denton

Department of Educational and Counselling Psychology

McGill University, 3700 McTavish Street, Montréal, QC, Canada H3A 1Y2

Krista R. Muis

Department of Educational and Counselling Psychology

McGill University, 3700 McTavish Street, Montréal, QC, Canada H3A 1Y2

Adam Dubé

Department of Educational and Counselling Psychology

McGill University, 3700 McTavish Street, Montréal, QC, Canada H3A 1Y2

Skylar Armstrong

Department of Psychology, McGill University

2001 Avenue McGill College, Montréal, QC H3A 1G1

ABSTRACT

Students have difficulty assessing the quality of information. They often rely on

content-focused criteria to make reliability assessments and, as a result, may accept

inaccurate information. Despite the impact of poor source evaluation skills,

educational researchers have not widely examined source evaluation behaviours in

authentic environments or tasks. Students’ epistemic cognition, or their thinking

about the epistemic properties of specific knowledge claims and sources, is one

promising avenue to better understand their source evaluation behaviours. Two

studies were conducted to explore students’ epistemic thinking. In Study 1, college

students (n = 12) reported their reliability criteria in focus group interviews. Four

of these participants also examined the reliability of an online news article.

Grounded theory was used to infer students’ epistemic ideals and reliable epistemic

processes. In Study 2, students (n = 43) rank-ordered two news articles and justified

how they assigned each article’s rank in a written response. Most students were

able to accurately rank-order the articles using relevant epistemic processes.

Cluster analysis was used to characterize the evaluation criteria participants reported. Surprisingly,

more participants who justified their decisions using relevance criteria accurately

rank-ordered the articles. The roles of direct and indirect indicators of reliability are

discussed through the lens of the Apt-AIR framework of epistemic thinking.

Keywords: epistemic cognition; source evaluation; digital literacy; mixed methods

research

Access to the internet has changed the way students interact with the world around them. With

more opportunities than ever to access, create, and share content, internet users can be seekers

and sources of information. The pervasiveness of these roles has reinvigorated educational

efforts to specifically develop students’ ability to discern whether information is reliable.


Researchers have documented students’ difficulty gauging the reliability of sources (Braasch et

al., 2013; Halverson et al., 2010; Mason et al., 2011). Specifically, researchers have identified

the tendency to rely on content-focused features, such as comprehensibility (Machackova &

Smahel, 2018; Subramaniam et al., 2015), and surface-level epistemic features, such as

publisher (Bråten et al., 2009), to assess reliability. One way to foster adaptive source

evaluations on the internet is by improving epistemic cognition (Greene & Yu, 2016; Sinatra &

Chinn, 2012). Epistemic cognition refers to thinking about the acquisition, justification, and use

of knowledge (Hofer, 2016).

Students who do not engage in this specific form of thinking are more susceptible to accepting and disseminating false information, which can have local and societal impacts (Chinn & Barzilai, 2018). For example, believing that the installation of 5G towers led to the pandemic may influence the safety measures a person observes daily as well as their voting decisions. Undoubtedly, the

potential impact of improving students’ epistemic cognition has stimulated research to better

understand the nature of their thinking about content on the internet (e.g., Cho et al., 2018;

Greene et al., 2014, 2018) and to boost this crucial digital literacy skill (e.g., Barzilai et al., 2020;

Mason et al., 2014; Wiley et al., 2009). Yet, to explore this digital literacy skill, education

researchers have primarily conducted studies offline (e.g., Mason et al., 2018) or used curated

materials that may not reflect authentic information found on the internet (e.g., E. H. Jung et al.,

2016; Thon & Jucks, 2017). Given the situated nature of students’ epistemic cognition

(Sandoval, 2017), the implications of such studies may not apply to source evaluations on the

internet. To address this gap in the literature, we investigated college students’ epistemic

thinking about source evaluations using the Apt-AIR framework.

LITERATURE REVIEW

Critically assessing the quality of online information requires engaging in a variety of cognitive

and metacognitive processes. These processes can consist of epistemic thinking, that is, cognitive and metacognitive thinking about the epistemic properties of specific information, knowledge

claims, and sources (Barzilai & Zohar, 2014). For example, a student may start by examining

surface-level features, such as the content’s publication date (cognition), and then assess the

alignment of the information with their task (metacognition). Next, they may examine the

author’s expertise (epistemic cognition) and monitor the results of their evaluation to move

forward accordingly (epistemic metacognition). According to Barzilai and Zohar (2014), a

student’s epistemic thinking processes interact such that their epistemic ideals influence the

reliable epistemic processes in which they engage. In their Apt-AIR framework, Barzilai and

Chinn (2018) elaborated on the cognitive and metacognitive aspects of students’ epistemic

aims, ideals, and reliable processes during source evaluations.

Situated Epistemic Thinking

Educational theorists have pushed for a situated view of epistemic cognition to account for

researchers’ context-dependent findings (Elby & Hammer, 2010; Hammer & Elby, 2002;

Sandoval, 2014, 2017). Barzilai and Chinn (2018) addressed this call in their Apt-AIR

framework, in which they integrated their previous frameworks: the AIR model (Chinn et al.,

2011, 2014; Chinn & Rinehart, 2016), and the Multifaceted Framework of Epistemic Thinking

(Barzilai & Zohar, 2014, 2016). In their Apt-AIR framework, Barzilai and Chinn (2018)

acknowledged that some epistemic thinking may apply to multiple domains, whereas other

epistemic thinking remains domain-specific. Chinn and Sandoval (2018) refined this position,


explaining that students’ epistemic processes may appear similar between domains or contexts,

but the details of the processes differ substantially. For example, students might engage in

similar source evaluations in science and history contexts. However, they can engage in

different reliable epistemic processes to evaluate trustworthiness. Given the variety of

situations in which students encounter information, they are required to competently and adaptively apply appropriate epistemic aims, ideals, and processes to obtain epistemic achievements (Barzilai & Chinn, 2018). Students' apt use of epistemic processes supports their ability to accurately evaluate and create information.

To elaborate, in the AIR model, Chinn and colleagues (2014, 2016) describe the cognitive

processes that surround achievement of an epistemic aim. Their model includes epistemic aims,

ideals, and reliable epistemic processes. Epistemic aims refer to the objectives and importance

a student sets for their cognition or action (e.g., knowledge, Chinn et al., 2014), and their aims

can influence how they process information (Greene et al., 2014, 2018). For example, a

student’s epistemic aim may be to determine whether they can use information from an

unfamiliar health website to make an informed health decision. Epistemic ideals refer to the

criteria or standards students use to examine whether their epistemic aims have been met (e.g.,

adequacy of evidence, Chinn et al., 2014). Chinn and colleagues (2014) explained that a student's epistemic ideals are the criteria that they use to justify their acceptance or rejection

of an epistemic product (e.g., claim or entire webpage, Barzilai & Chinn, 2018). To assess

information quality, a student may enact reliable epistemic processes, such as consistency

checking or integrating multiple sources, to achieve their aims or produce epistemic products

(Barzilai & Zohar, 2014; Richter & Schmid, 2010b). Whereas Chinn and colleagues (2014, 2016)

focused on epistemic achievements, Barzilai and Zohar (2014, 2016) emphasized the

antecedents of successful achievements.

Barzilai and Zohar’s framework (2014, 2016) contributed cognitive and metacognitive aspects

of epistemic thinking to the Apt-AIR model. Their framework described cognitive epistemic

strategies and processes that can be used to scrutinize specific knowledge claims and sources.

Following Flavell and colleagues (Flavell, 1979; Flavell et al., 2002), Barzilai and Zohar (2014, 2016)

also delineated three aspects of epistemic metacognition: epistemic metacognitive skills,

epistemic metacognitive knowledge, and epistemic metacognitive experiences. Epistemic

metacognitive skills refer to a student’s planning, monitoring, and evaluating of the epistemic

strategies and processes they engage in (Barzilai & Chinn, 2018; Barzilai & Zohar, 2014, 2016).

For example, Cho and colleagues (2018) found that students employed planning and

monitoring to integrate multiple perspectives and examined the accuracy of knowledge claims

and sources to establish reliability.

Epistemic metacognitive knowledge refers to a student’s metacognitive knowledge about the

nature of knowledge and knowing (Barzilai & Chinn, 2018; Barzilai & Zohar, 2014, 2016).

During a source evaluation, a student’s metacognitive knowledge that online information is

created for a variety of purposes may stimulate their evaluation of an author’s resulting biases.

Their epistemic beliefs about knowledge in general may influence the types of processes they

engage in as well as their epistemic metacognitive experiences. Finally, epistemic

metacognitive experiences refer to a student’s emotions that are evoked as they build

knowledge (Barzilai & Chinn, 2018; Barzilai & Zohar, 2014, 2016). For example, a student who

believes the nature of knowledge is complex or uncertain may experience less anxiety when


confronted with conflicting perspectives than a student who does not hold those beliefs (Muis

et al., 2015). Taken together, Barzilai and Chinn’s (2018) theoretical work illuminates how

students’ epistemic thinking could influence the quality of their source evaluations.

Source Evaluations on the Internet

When examining the reliability of information online, students may compare a source’s

content-based, design-based, and epistemic features to their tacit or explicit epistemic ideals.

For example, a student may adopt the epistemic ideal that, to be deemed trustworthy, a health website must cite high-quality evidence to support its claims. To examine whether this

epistemic ideal has been met, the student may scrutinize the sources cited in a reference list or

click on embedded hyperlinks to see where that evidence came from. Researchers have

documented students’ use of a variety of evaluation criteria during source evaluations,

frequently noting students' use of epistemic ideals (e.g., author's expertise, message accuracy, or purpose; Halverson et al., 2010; Ulyshen et al., 2015), content-based criteria (e.g., Barnes et al., 2003; Kiili et al., 2008), and design-focused criteria (e.g., Gerjets et al., 2011; Cunningham & Johnson, 2016). Despite students' reliance on epistemic and non-epistemic evaluation criteria, some

researchers have suggested that students do not consider epistemic features at all when

evaluating the reliability of new information (e.g., Bråten et al., 2016; Wineburg, 1991) or use

limited epistemic ideals to justify their acceptance or rejection of information (e.g., Barzilai &

Eshet-Alkalai, 2015; Britt & Aglinskas, 2002; Greene et al., 2014, 2018). Yet, other researchers

have observed high rates of students’ epistemic ideal use (e.g., Kąkol et al., 2017; Halverson et

al., 2010).

Mason and colleagues (2011) asked students to think out loud as they examined eight curated

webpages presented in an offline environment. The researchers varied the webpages’

authoritativeness, position toward the topic, and the evidence provided to gather students’

spontaneous reflections about the sources. Their analyses revealed that most students reflected

on at least one epistemic ideal while examining the webpages, such as whether the source and

its evidence were scientific. Mason and colleagues' earlier work (2010) acknowledged that

students require new skills to evaluate the authority or accuracy of internet sources, yet these

researchers continued to design offline environments to assess such ideals and behaviours

(Mason et al., 2011, 2018). Like Mason and colleagues, education researchers have

predominantly examined epistemic ideals in controlled offline environments, including

multiple documents contexts (e.g., Braasch et al., 2013; Bråten et al., 2009; Wiley et al., 2009)

and hypermedia environments (e.g., Barzilai et al., 2020). As a result, findings about epistemic

cognition in curated contexts have been inappropriately extended to a distinct environment—

the unfiltered quagmire of the internet. Consequently, source evaluation trainings have been

developed based on findings from these controlled environments (Mason et al., 2014; Wiley et

al., 2009; Zhang & Duke, 2011), which undermines the efficacy of these trainings for internet

source evaluations.

Whereas researchers using online environments have documented higher rates of students’

epistemic ideal use (e.g., Kąkol et al., 2017), Halverson and colleagues (2010) identified

university students’ inappropriate use of epistemic ideals to evaluate online sources. To

establish reliability, the researchers observed more than half the students employ important

epistemic ideals, including assessing the source's credibility, followed by its accuracy,

objectivity and/or perspective of information presented, alongside content-based criteria.


Despite the prevalence of epistemic ideals in students’ written reports, the researchers

highlighted the discrepancy between students’ descriptions of selected sources as objective and

credible and the contents of the source (e.g., biased data). The researchers attributed this

finding to students’ topic-specific beliefs; however, their metacognitive knowledge about what,

when and how to use these epistemic ideals may have also played a role in students’ inaccurate

website assessments. Although similarities between online and offline source evaluations exist,

the prevalence of students’ appropriate epistemic ideal use during online source evaluations is

unclear. Barzilai and Chinn (2018) have outlined key guidelines to assess students’ epistemic

processes and developed offline interventions (Barzilai et al., 2020) to assess epistemic

scaffolds using their guidelines; however, further educational research is needed to better

understand the variety and use of students’ epistemic processes in environments and tasks that

more closely represent their online experiences. That is, given the situated nature of thinking

about source evaluations on the internet, a better understanding about the prevalence and

variation of students’ epistemic ideal use is critical prior to developing interventions to be used

in more authentic contexts.

THE PRESENT STUDIES

The purpose of the present research was to examine college students’ epistemic thinking

related to source evaluations. College students were selected because research has

demonstrated these students’ limited use of appropriate epistemic ideals during source

evaluations (Braasch et al., 2013; Halverson et al., 2010). In Study 1, students’ metacognitive

knowledge about epistemic ideals and processes was collected via focus group interviews. In

Study 2, students’ epistemic ideals were investigated during their evaluation of two authentic

news articles.

The following research questions guided the studies:

1. What characterizes college students’ epistemic metacognitive knowledge about

source evaluations on the internet?

2. How do college students’ epistemic ideals contribute to their overall source

evaluations?

Based on previous findings, we expected students to describe a variety of epistemic and non-epistemic criteria and processes. We hypothesized that students would emphasize non-epistemic criteria to assess reliability. However, we also hypothesized that students who relied on epistemic ideals would outperform those who relied on content-based criteria.

METHODS

Study Design

As a research team, we approached our investigation from a pragmatist perspective, drawing

on the strengths of diverse frameworks to understand students’ source evaluations (Johnson &

Onwuegbuzie, 2004). Following Creswell and Plano Clark’s (2017) guidelines, we used a

multiphase mixed methods design to assess students’ epistemic ideals and reliable epistemic

processes. A flow diagram depicting the design is presented in Figure 1. We use two notations

to represent how emphasis was placed on each data collection and analysis method (Creswell

& Plano Clark, 2011; Morse, 2003). For example, our use of “QUAL” in phase 2 indicates the

emphasis on qualitative methods, whereas our use of “QUANT” indicates emphasis on

quantitative analysis. Each phase was independently analyzed prior to integration of the


results. The current investigation represents two phases of a larger study aimed at developing

and implementing a training to improve source evaluations on the internet.

Figure 1: Schematic of Multiphase Mixed Methods Design

Note. Figure adapted from White and colleagues’ (2019) Figure 1.

Research Context and Setting

The studies took place at a publicly funded college in Québec (i.e., CEGEP). The CEGEP offers

both pre-university and career programs, with approximately 6,700 students enrolled in

two-year pre-university programs each semester. Students can choose concentrations in arts

and sciences, liberal arts, social sciences, and visual arts, among others. In 2017, about 83% of

enrolled students were 17-20 years of age. The student population represented more than 85

nationalities, with about 65% of students reporting English as their mother tongue.

Researcher Positionality

Our research team approached these studies with varied connections to the research setting.

The second and third authors had prior relationships with the college and the instructors

[Figure 1 depicts the three phases of the multiphase mixed methods design. Phase 1 (present paper): QUAL data collection (recruit students, n = 12, from nine psychology courses; conduct one-hour semi-structured interviews; product: transcripts) and QUAL data analysis in NVivo 12 (grounded theory approach, Glaser, 1978; inter-rater reliability; products: codebook, coded transcripts, coded source comparison, Cohen's kappa coefficient). Phase 2 (present paper): QUAL & quant data collection (recruit students, n = 43, from two psychology courses; collect students' source comparison; products: source rank-order, rank-order justification) and QUAL & QUANT data analysis in NVivo 12 and SPSS (content analysis; inter-rater reliability; descriptive statistics; cluster analysis; products: codebook, coded justifications, Cohen's kappa coefficient, between-subject profiles), followed by data interpretation and integration. Phase 3: QUANT & qual data collection (recruit adults, n = 64; collect pretest measures, source evaluation, and written response; products: prior knowledge measure, attitude measure, source rank-order, rank-order justification, written response) and QUANT & qual data analysis in SPSS and NVivo 12 (content analysis; descriptive statistics; analysis of (co)variance; multivariate analysis of covariance; products: Cronbach's alpha, Cohen's kappa coefficient, codebook, coded justifications and written responses), followed by data interpretation and integration.]


Table 1: Focus Group Participant Descriptions by Class

Participant pseudonym | Age | Sex | Program | Year of study | Previous experience with source evaluations

Class 1
Jose | 18 | Male | Social sciences | 2nd | Enrolled in literature course that examines reliability and "truth" in American non-fiction (e.g., memoirs)
Cameron | 18 | Male | Social sciences | 2nd | Completed research methods course that examined the process of finding reliable sources and reducing bias

Class 2
Sharon | 19 | Female | Social sciences | 2nd | Attended six lessons presented by college's librarians on finding peer reviewed sources; Completed same course as Cameron
Michelle | 18 | Female | Arts & sciences | 2nd | Evaluated primary and secondary sources for literature course term paper

Class 3
Amanda | 17 | Female | Social sciences | 1st | Completed sociology course that explored problem-solving using multiple perspectives
Charles | 18 | Male | Social sciences | 2nd | Attended lesson presented by high school librarian about finding sources in French

Class 4
Dolores | 17 | Female | Liberal arts | 1st | Attended lesson presented by college's librarians about the CRAAP test
Betty | 18 | Female | Social sciences | 2nd | Taught younger sibling about importance of authority when evaluating controversial evidence
Vera | 17 | Female | Liberal arts | 1st | Evaluated multiple perspectives for term paper on controversial topic; Attended same lesson as Dolores
Josephine | 18 | Female | Social sciences | 2nd | Enrolled in social psychology course that examines the role of attitudes and bias in behavior

Class 5
Will | 18 | Male | Social sciences | 2nd | Used multiple forums, with varying levels of reliability, to answer personal inquiries

Class 6
Jennie | 18 | Female | Social sciences | 2nd | Enrolled in different section of the same course as Josephine

Focus Group Interviews

Semi-structured interviews were used to understand participants’ metacognitive knowledge of

strategies and tasks related to source evaluation on the internet. Drawing from Barzilai and

Zohar’s (2012) interview protocol, nine open-ended questions were developed to explore

participants’ (a) criteria for establishing reliability (e.g., What features does a reliable website

have?), (b) procedure for establishing reliability (e.g., What would you do if you found two

websites that made conflicting claims?), and (c) beliefs about the influence of individual


differences on source evaluations (e.g., How do biases influence how information is created and

interpreted?). Focus group interviews were audio-recorded and transcribed verbatim for

qualitative analysis. Participants’ epistemic metacognitive knowledge was inferred from their

responses. See Appendix A for the full interview protocol.

The grounded theory approach was used to guide data collection and analysis. One transcript

was independently evaluated by the first and third authors over three stages: initial, focused,

and theoretical coding (Glaser, 1978). First, the raters examined the transcript line by line

to identify emerging themes brought up by participants. Emerging themes included stating

evaluation criteria, describing the evaluation procedure, and identifying individual differences.

These themes were discussed to develop a preliminary focused coding scheme. See Table 2 for

a list of selected codes with illustrative examples. In the second phase, the raters synthesized

larger segments of the text, examining each segment and constantly comparing that incident to

previously coded segments (Glaser & Strauss, 2017). The raters again examined any

disagreements to revise the coding scheme. Using the updated coding scheme, the first author

coded the remainder of the transcripts in NVivo 12, calculated the data saturation ratio (>5%

new themes, Guest et al., 2020), and added novel codes to the coding scheme.

To establish inter-rater reliability, the two raters coded the initial transcript a third time using

the revised coding scheme. Their agreement, as measured by Cohen's kappa coefficient, was initially .62; all disagreements were discussed before another round of coding, which raised agreement to .79, indicating substantial agreement (Landis & Koch, 1977). All disagreements were resolved and used to inform the final coding scheme. The

first author reanalyzed the remainder of the transcripts using constant comparison. In the final

phase, the raters integrated the focused codes using a combination of Glaser’s (1978) process

and dimension coding families. See Appendix B for the full coding scheme.
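For readers unfamiliar with the agreement statistic reported above, the following is a minimal sketch of how Cohen's kappa can be computed from two raters' codes over the same transcript segments. The code labels and toy ratings are hypothetical and do not reflect the study's codebook or data.

# Minimal sketch of Cohen's kappa for two raters coding the same segments.
# kappa = (observed agreement - chance agreement) / (1 - chance agreement).
# Category labels and ratings below are hypothetical.
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Compute Cohen's kappa for two raters' codes over the same segments."""
    assert len(rater_a) == len(rater_b)
    n = len(rater_a)
    # Observed agreement: proportion of segments coded identically.
    p_o = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Chance agreement, from each rater's marginal code frequencies.
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    p_e = sum(freq_a[c] * freq_b[c] for c in set(rater_a) | set(rater_b)) / n ** 2
    return (p_o - p_e) / (1 - p_e)

# Hypothetical example: two raters coding ten transcript segments.
rater_1 = ["criteria", "procedure", "criteria", "differences", "criteria",
           "procedure", "criteria", "criteria", "differences", "procedure"]
rater_2 = ["criteria", "procedure", "procedure", "differences", "criteria",
           "procedure", "criteria", "criteria", "criteria", "procedure"]
print(round(cohens_kappa(rater_1, rater_2), 2))  # 0.67 for this toy data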

Table 2: Selected Interview Codes with Illustrative Examples

Micro-codes paired with illustrative examples (participant pseudonym in parentheses)

Macro-code: Stating evaluation criteria

1. Author expertise: Well it can have like the... like words about the author like "Oh he studied this for this, and oh he's got a bachelor's degree in this" and then you're kind of just like "oh, okay he knows what he's talking about." (Amanda)
2. Corroboration: Yeah, you could but in like really good research you're at least going to have one other person to support and say like "oh I found this as well." (Sharon)
3. Currency: So we have to look at the currency. (Dolores)
4. Design: Another thing is like, if you read through the article, this is a really particular thing that bothers me. But they cut the text up like, between pictures and quotes and ads. And then there's also the fact that even then, their using up a lot of space to make it seem longer and more professional but they're really saying something really simple and they're not really communicating anything, they're just word vomiting and what they're saying doesn't make, necessarily make coherent sense. It's just there, this is information and it may be a little biased and it's just not professional. (Dolores)
5. Evidence quality: Well like in terms of that I think if it's such a divided opinion maybe look at how they came to the conclusions and sort of then decide which method is more reliable. (Michelle)
6. Funding: I mean I think it plays a really big role. Like sometimes there's like when you look on websites, you'll see like popup ads and um especially on my, like when you get into like really untrustworthy ones, they have a lot of them and they have flashy and really catching titles. Like even I'm like "I so don't want to see that" and that's sort of how they make money, by you clicking on it. So, when I see like a lot of ads I sort of, not trust it. (Josephine)
7. Objectivity: Academic articles, no colour, black and white. It's just like facts, this is why they're so trustworthy right, there's no sugar coating on it. (Cameron)
8. Peer review: Exactly. You have to... A book or an academic article has to go through a process before being put out into the world but um... someone writing for Buzzfeed or someone making a video in their basement ranting about something they don't like, that's not going through anyone else it's just them and their information and what they want to say and then it's out there and anyone can see it. And if you don't think about that in context you can easily think "well these two pieces of information, I found them in the same place so even though one is a book and one is a, not a journal article but just a random Internet article, then they're about the same thing, I found them in the same spot, then they're probably about the same value". But you have to think about the process that one of them had to go through. Like a book had to be written, and then edited by the person, and the probably edited by an actual editor, and then had to be approved by a publisher and then... (Vera)
9. Purpose: The person has nothing to gain usually is trustworthy. (Josephine)
10. Tone: Ooh. I read more about it, try to see other people's perspectives about it or maybe it's the way they worded it that made it seem fishy. (Amanda)
11. Truthfulness: Um, well if they are honest about where they get their information from so they're gonna cite where they got the information. (Betty)
12. Type: Let's say if I read um a blog online, and then I read a book right after, I'd more likely believe the book because, from what I've been taught, it's something that's more valid than something just written by like I remember I don't know if I, one of my old teachers say "you never know who's writing on the Internet, it could be under a pseudonym or anything" and it's a lot safer to trust in a book than something online. (Jose)
13. Venue: Just like, they've proven themselves to not be faulty and they've been giving accurate information in the past. (Sharon)
14. Writing quality: sometimes the quality of the writing you can find, if there's a lot of spelling mistakes or something or improper punctuation, I look for that sometimes and you can kind of tell that it wasn't written properly. (Jose)


To help reduce the likelihood of experiencing what Jennie described, two students elaborated

on ways to internally examine the quality of a source’s evidence. Within sources, Betty

distinguished trustworthy sources' citation practices from those of untrustworthy sources:

Um, well if they are honest about where they get their information from so they’re gonna cite

where they got the information. Cause a lot of times you’re getting it [the information] like

second hand, third hand, whatever. But if you can see the trail from where they got the original

information. Like if you were lying, you’d be afraid to show your sources, if you have any. (18,

2nd year)

Vera identified the source’s peer review process as another internal factor to differentiate the

quality of sources:

You have to... A book or an academic article has to go through a process before being put out

into the world but um... someone writing for Buzzfeed or someone making a video in their

basement ranting about something they don’t like, that’s not going through anyone else it’s just

them and their information and what they want to say and then it’s out there and anyone can

see it. And if you don’t think about that in context you can easily think “well these two pieces of

information, I found them in the same place so even though one is a book and one is a, not a

journal article but just a random internet article, then they’re about the same thing, I found

them in the same spot, then they’re probably about the same value”. But you have to think about

the process that one of them had to go through. Like a book had to be written, and then edited

by the person, and then probably edited by an actual editor, and then had to be approved by a

publisher and then... (17, 1st year)

Whereas Betty and Vera identified internal processes, Dolores elaborated on the contribution

and limits of corroboration during a source evaluation:

So, what I found was very helpful was I’d find a source and read through it, figure out if it seemed

somewhat legit and then I would go to a fact checker site and I would see how they rated it and

why. And it’s obviously like fact checking a fact checking site it just, there comes a point where

you have to be like okay, I’ve done my due diligence, this [source] is as trustworthy as

something I can find can be and then I would use it as a source. (17, 1st year)

When students assess multiple sources, they must reconcile the similarities and differences

between sources, their claims, and the support presented (Barzilai & Zohar, 2012). Learners of all ages struggle to engage in this epistemic process (Eshet-Alkalai & Chajut, 2009). Students often decide not to engage further with information that contradicts their attitudes, and they may not assess the validity of their own beliefs (Hart et al., 2009). Bråten and colleagues (2011)

delineated the impact of failing to integrate perspectives, explaining that a student may espouse

false beliefs from biased sources. Consistent with the literature, participants reported detecting

biases when examining multiple sources.

Students Detect Bias by Evaluating Multiple Perspectives

All five groups brought up that individual differences, such as biases, attitudes and purpose,

influence how information is both evaluated and created. Acknowledging the difficulty of

integrating multiple perspectives, Sharon offered the following example:

I try to, I always try to think of it [the topic] from like, the opposite point of view. Especially

when it comes to arguments, or like solving arguments between people. Like, “okay you might

think you’re right, but have you considered it from this point of view?,” so why the other person