Grant / January 2021

Misinformation Corrections


Misinformation and disinformation campaigns often rely on bots and fake accounts that impersonate human users, mirroring the demographic characteristics, political beliefs, and social values of their audience to establish credibility. Such efforts succeed because people's beliefs about and responses to new information depend not only on the content of a message but also on the identity of the messenger and the channel of transmission. The uncorrected and unmitigated spread of misinformation is a widespread security concern: when we interpret online information, we lack credible, standardized ways to verify the identity, credibility, and authenticity of its sources.

In this project, we investigate the social psychological aspects of social framings that can lead to affective polarization around misinformation on social media. Prior research on misinformation has focused on the role of experts or anonymous identities on social media; however, exactly who delivers a correction, and how effective that choice is, remains relatively understudied. We focus on two social psychological concepts, social identity and group status characteristics, to decipher how different social framings shape the interpretation of information and the reception of subsequent corrections. Our experiments concretely inform who should deliver misinformation corrections. Importantly, our approach helps us understand how audiences that traditionally harbor anti-institutional and anti-establishment sentiments will receive misinformation corrections depending on the message and the identity of the messenger. Our project directly aligns with and contributes to the CLTC mission to advance our understanding of socio-technical security environments, especially at the intersection of health information and civic engagement.

Findings, Papers, and Presentations