Comm Horizons @ UCD 2024: Information Integrity


We are excited to announce the first Communication Horizons Conference at UC Davis. This year's theme is "Navigating the Complexities of Information Integrity in a Digital Age". As the digital era evolves, our understanding of information integrity becomes increasingly multifaceted, necessitating a pioneering interdisciplinary approach. This conference is dedicated to exploring the myriad challenges and opportunities that arise as we navigate the intricate landscape of digital information.


Details

  • Dates: May 3 - 5, 2024
  • Location: Department of Communication, University of California, Davis
  • Questions: email horizonconf@ucdavis.edu
  • Registration is free. Travel funding is not provided.

Agenda*

Friday May 3, 2024

Teaching & Learning Center: Room #1215 (Google Maps Directions)

Saturday May 4, 2024

Teaching & Learning Center: Room #1218 (Google Maps Directions)

  • 8:30am – 9:00am: Catered Breakfast 
  • 9:00am – 9:15am: Welcome & Introduction – Professor Bo Feng, Chair
  • 9:15am – 10:30am: Panel One – Misinformation, Deception, and Correction
  • 10:30am – 10:45am: Break     
  • 10:45am – 12:00pm: Panel Two – Misinformation Cues, Dynamics, and Effects
  • 12:00pm – 12:45pm: Catered Lunch
  • 12:45pm – 1:45pm: Keynote – Tanu Mitra (University of Washington)
  • 1:45pm – 2:00pm: Break
  • 2:00pm – 3:15pm: Panel Three – Emotional Influences on Message Selection, Production, and Diffusion
  • 3:15pm – 3:30pm: Break
  • 3:30pm – 4:45pm: Panel Four – Gatekeeping and Gatekeepers – Parents, Media Organizations, and Governments
  • 4:45pm – 5:00pm: Break
  • 5:00pm – 6:15pm: Panel Five – Artificial Intelligence – Helps, Harms, and Methodological Advances
  • 6:15pm – 6:20pm: Closing Remarks – Richard Huskey
  • 6:30pm: No-Host Reception – Ruhstaller Farm, 6686 Sievers Rd, Dixon, CA 95620

Sunday May 5, 2024

Wine Tasting in Napa (Optional) 

  • 9:00am: Depart UC Davis
  • 10:30am: Wine Tasting + Cheese & Charcuterie, Ashes & Diamonds - $97/person (includes tax + gratuity). Payment deadline is EOD April 28; see below.
  • 12:30pm: Lunch, Oxbow Market & Downtown Napa
  • 3:00pm: Depart for UC Davis
  • 4:30pm: Arrive at UC Davis

*Keynote details and presentation agenda are shown at the bottom of this page. 


Travel Information

Airports

  • Sacramento International Airport (SMF): 20 miles from campus
  • Oakland International Airport (OAK): 75 miles from campus
  • San Francisco International Airport (SFO): 80 miles from campus (we strongly advise against this option)

Airport Transportation
The Davis Airporter shuttle offers transportation from SMF ($30 each way) and SFO ($105 each way).

Hotels
We have secured a room block at the Aggie Inn. Please follow this link to book a room. The nightly rate is $199 + tax.

There are many hotel options in downtown Davis that are a short walk from campus. If you are looking for other recommendations, the Hyatt Place UC Davis is comfortable and well-located.


Location

Conference Location: UC Davis Teaching and Learning Complex

The conference will be held in the UC Davis Teaching and Learning Complex (Google Maps Directions).

  • Friday: Room #1215
  • Saturday: Room #1218

Parking

The closest parking lots are Visitor Lot 40 (Google Maps Directions) and the Pavilion Structure (Google Maps Directions). Payment is required on Friday. You must download and use the Aggie Park App to pay for parking. Parking is free on Saturday. 

We will provide transportation to/from receptions and Napa if you need it.


Presentation Guidelines

Plan for an 8-minute presentation plus Q&A. You will be able to connect a laptop with a standard HDMI cable (please bring an adapter if your laptop lacks an HDMI port).


Wine Tasting Payment

To pay for wine tasting, please send $97 to Richard Huskey using one of these methods:

IMPORTANT: If your payment is not received by EOD April 28 you will not be able to attend the wine tasting trip. This is a firm deadline. Please email Richard Huskey (rwhuskey@ucdavis.edu) if you have any concerns.


Conference Organizers

This conference is organized by Drs. Richard Huskey, Heather Jane Hether, and Soojong Kim. Support for the conference is generously provided by the Department of Communication at UC Davis.


Keynote Speakers


Mohrman Lecture: Stuart Soroka (University of California, Los Angeles)

Title: Bad News! Misinformation, Misperceptions & Negativity

Abstract: Recent concerns about misinformation tend to focus on salient moments of media failure. This is for good reason: inaccurate news about vaccines, global warming, Pizzagate, or the Big Lie can be detrimental to citizens’ understanding of the world in which they live. In some ways, these instances of misinformation are relatively new. The problem of misinformation is not new, however; and the fact that important sources of information can be inaccurate is well established. This talk accordingly highlights the advantages of broadening our conception of misinformation to incorporate both recent and longstanding inaccuracies in mass and social media content. It does so by focusing on two recent papers dealing with a well-established source of bias in news production and consumption: negativity. Research with Christopher Wlezien illustrates the connection between inaccuracy/misinformation and negativity biases in US television news. Research with Seonhye Noh uses trace data from a news aggregator to explore the degree to which negativity-bias-induced misinformation is fueled not just by the choices of news outlets, but the choices (and preferences) of news consumers. Both projects highlight the overlap between ‘new’ and ‘old’ forms of misinformation. They also make clear that, even as exposure to some forms of misinformation may be limited, most news consumers engage with other forms of misinformation on a regular basis.

Bio: Stuart Soroka is Professor in the Departments of Communication and Political Science at the University of California, Los Angeles. His research focuses on political communication, political psychology, and the relationships between public policy, public opinion, and mass media. He has been particularly interested in negativity (and positivity) in news coverage, and the role of mass media in representative democracy.


Communication Horizons Keynote: Tanu Mitra (University of Washington)

Title: Multidisciplinary Approaches for Understanding and Combating Problematic Online Information

Abstract: Today, online social systems have become integral to our daily lives. Yet, these systems and the algorithms driving them surface problematic content, whether they be harmful misinformation, damaging conspiracy theories or hard to escape filter bubbles. Left unchecked, these problems disrupt the integrity of our information ecosystem and can negatively impact our democracy.

As a social computing researcher, my work introduces computational methods and systems to tackle some of these issues. In this two-part talk, I will first present scalable computational methods to understand the characteristics of certain types of problematic content, such as conspiratorial discussions and what makes people join and abandon conspiratorial communities. In the second part of the talk, I will present systems built by my group to help counter problematic information, as well as infrastructures built to systematically audit the algorithms driving problematic content. For example, NudgeCred nudges users toward better information credibility assessments, OtherTube helps users break free from platform-enforced personalized filter bubbles, and NewsComp allows users to critically engage with and read news articles from multiple disparate sources. Finally, I will close by previewing important new opportunities I envision tackling in the next several years for creating a more transparent, responsive, and participatory democratic environment.

Bio: Tanu Mitra is an Assistant Professor at the University of Washington, Information School, where she leads the Social Computing and ALgorithmic Experiences (SCALE) lab group. She and her students study and build large-scale social computing systems to understand and counter problematic information online. Her research spans auditing online systems for misinformation and conspiratorial content, understanding digital misinformation, unraveling narratives of online extremism and hate, and building technology and designing systems to foster critical thinking online. Her work employs a range of interdisciplinary methods from the fields of human computer interaction, data mining, machine learning, and natural language processing.

Dr. Mitra’s work has been supported by grants from the NSF, DoD, Social Science One, and other foundations. Her research has been recognized through multiple awards and honors, including an NSF-CRII, an early career ONR-YIP, the Adamic-Glance Distinguished Young Researcher award, and the Virginia Tech College of Engineering Outstanding New Assistant Professor award, along with several best paper honorable mention awards. Dr. Mitra received her PhD in Computer Science from Georgia Tech’s School of Interactive Computing and her master’s in Computer Science from Texas A&M University.


Detailed Panel Agenda

Panel One: Misinformation, Deception, and Correction 

  • Lene Aarøe, Miceal Canavan, Julian Christensen. Do The Facts Matter? The Impact of Statistical Evidence and Single Exemplars on Policy Opinions Among Citizens and Politicians in Digital Democracies
  • Sijia Qian, Cuihua Shen, Jingwen Zhang, Magdalena Wojcieszak. Combating Out-of-Context Visual Misinformation: Impact of Incentive-Based Strategies in Digital Media Literacy Interventions
  • Jiaojiao Ji, Xingling Qin, & Christopher Calabrese. Addressing Misinformation on Weibo: The Role of Corrections, Awareness Prompts, and Legal Warnings Across Different Misinformation Types
  • Rongwei Tang, Leticia Bode, Emily K. Vraga. The Role of Conspiracy Ideation and Message Credibility in Influencing the Effects of Correction and Its Source on Reducing Misperceptions
  • Ross Dahlke, Jeffrey T. Hancock. The Effect of Online Misinformation Exposure on False Election Beliefs
  • Je Hoon Chae, Tim Groeling. The Impact of Partisan and Elite Cues on Fact-Check Credibility

Panel Two: Misinformation Cues, Dynamics, and Effects

  • Joshua Ashkinaze, Eric Gilbert, Ceren Budak. The Dynamics of (Not) Unfollowing Misinformation Spreaders 
  • Angela Y. Lee, Ryan C. Moore, & Jeffrey T. Hancock. Building resilience to misinformation in communities of color: Results from two studies of tailored digital media literacy interventions 
  • Sapna Suresh. Ruining The Story: A Model For Suppressing Engagement With Misinformation 
  • Narine S. Yegiyan, Haoning Xue, Jingwen Zhang. Exploring the role of the fact-checking label in misinformation detection: why it is limited and short-lived
  • Yilang Peng, Sijia Qian, Yingdan Lu, Cuihua Shen. Understanding the Role of Visual Features in Credibility Perceptions of Social Media Posts
  • Michael S. Cohen, Jean Decety, Joseph W. Kable. Inoculation interventions counter false accusations against novel mock political candidates

Panel Three: Emotional Influences on Message Selection, Production, and Diffusion 

  • Seth Frey. The rippling dynamics of valenced messages in naturalistic youth chat 
  • Xuanjun Gong, Ezgi Ulusoy, Elizabeth Riggs, Rachael Kee, Jason Coronel, Allison Eden, Amber Boydstun, Richard Huskey. Preferential Evidence Accumulation Governs News Selection: A Drift Diffusion Modeling Study 
  • Magdalena Wojcieszak, Muhammad Haroon. User versus algorithm: What Drives ideologically like-minded and problematic video exposure on YouTube?
  • Emily McKinley, Muhammad Ehab Rasul, and Sijia Qian. How Humor Shapes COVID-19 Vaccine Misinformation: Unveiling Prevalence, Characteristics, and Audience Perception 
  • Graham Dixon, Samuel Bashian, Katie Snelling. Overcoming the Silencing Effect of a Minority View-Dominant Information Environment: The Role of Self-Affirmation 
  • Camille J. Saucier, Nathan Walter. Leveraging Motivation to Curb Misinformation: Can Self-Affirmation Explain the Adoption of Online Conspiracy Theories?

Panel Four: Gatekeeping and Gatekeepers – Parents, Media Organizations, and Governments 

  • Tim Levine. Truth-default Theory, The Social Science of Human Deception, and Navigating Information Integrity
  • Allyson L. Snyder, Drew P. Cingel, & Alexis Patterson Williams. U.S. Parents’ Scientific Literacy and Efficacy: Associations with Children's STEM Media Engagement 
  • Rachel Berwald. The Power of Pre-existing Beliefs: Impacts of Misinformation on Public Trust in Brazil’s 2022 Presidential Election 
  • Hans W. A. Hanley, Yingdan Lu, Jennifer Pan. Narratives of Foreign Media Ecosystems in Chinese Social Media Discussions of the Russo-Ukrainian War 
  • Yiqi Li, Lu Xiao. Straddling the True and Untrue: The Patterns and Co-Evolution of Network Brokers and Morality Expressions 
  • Xudong Yu, Magdalena Wojcieszak. Attacking the out-party much more than praising the in-party: A systematic analysis of 50 U.S. partisan media from 2010 to 2020 

Panel Five: Artificial Intelligence – Helps, Harms, and Methodological Advances 

  • Lisa Jihyun Hwang, Rachel Elizabeth McKenzie, Bo Feng. A Comparison of Human and ChatGPT Support Provision and the Influence of Support Seekers’ Self-disclosure Level on Support Quality 
  • Li Qi, Miriam Metzger, Laurent Wang, and Xingyu Liu. Facts About Fact-Checkers: Comparing Credibility Perceptions, Usage, and Sharing of Different Fact-Checking Sources.
  • Haoning Xue, Jingwen Zhang, Cuihua Shen, Magdalena Wojcieszak. The Majority of Fact-checking Labels are Intense and This Decreases Engagement Intention 
  • Claire Wonjeong Jo, Miki Wesolowska, Magdalena Wojcieszak. GPT-4-Vision vs. Human: Who Identifies “Harmful” YouTube Videos More Accurately?
  • Jennifer Krebsbach. How might someone’s politics show in their online rhetoric? Theory and evidence of employed identity markers 
  • Kai-Cheng Yang, Filippo Menczer. Anatomy of AI-powered malicious social bots 
