Journal of Online Trust and Safety
Latest Publications


TOTAL DOCUMENTS: 10 (five years: 10)

H-INDEX: 0 (five years: 0)

Published By Stanford Internet Observatory

ISSN: 2770-3142

2021 ◽  
Vol 1 (1) ◽  
Author(s):  
Elena Cryst ◽  
Shelby Grossman ◽  
Jeff Hancock ◽  
Alex Stamos ◽  
David Thiel

Introducing the Journal of Online Trust and Safety


2021 ◽  
Vol 1 (1) ◽  
Author(s):  
Nathaniel Persily

A Proposal for Researcher Access to Platform Data: The Platform Transparency and Accountability Act


2021 ◽  
Vol 1 (1) ◽  
Author(s):  
Camille François ◽  
Evelyn Douek

Intense public and regulatory pressure following revelations of Russian interference in the US 2016 election led social media platforms to develop new policies to demonstrate how they had addressed the troll-shaped blind spot in their content moderation practices. This moment also gave rise to new transparency regimes that endure to this day and have unique characteristics, notably: the release of regular public reports of enforcement measures; the provision of underlying data to external stakeholders and, sometimes, the public; and collaboration across industry and with government. Despite these positive features, platform policies and transparency regimes related to information operations remain poorly understood. Underappreciated ambiguities and inconsistencies in platforms’ work in this area create perverse incentives for enforcement and distort public understanding of information operations. Highlighting these weaknesses is important as platforms expand content moderation practices in ways that build on the methods used in this domain. As platforms expand these practices, they are not continuing to invest in their transparency regimes, and the early promise and momentum behind the creation of these pockets of transparency are being lost as public and regulatory focus turns to other areas of content moderation.


2021 ◽  
Vol 1 (1) ◽  
Author(s):  
William Godel ◽  
Zeve Sanderson ◽  
Kevin Aslett ◽  
Jonathan Nagler ◽  
Richard Bonneau ◽  
...  

Reducing the spread of false news remains a challenge for social media platforms, as the current strategy of using third-party fact-checkers lacks the capacity to address both the scale and speed of misinformation diffusion. Research on the “wisdom of the crowds” suggests one possible solution: aggregating the evaluations of ordinary users to assess the veracity of information. In this study, we investigate the effectiveness of a scalable model for real-time crowdsourced fact-checking. We select 135 popular news stories and have them evaluated by both ordinary individuals and professional fact-checkers within 72 hours of publication, producing 12,883 individual evaluations. Although we find that machine learning-based models using the crowd perform better at identifying false news than simple aggregation rules, our results suggest that neither approach is able to perform at the level of professional fact-checkers. Additionally, both methods perform best when using evaluations only from survey respondents with high political knowledge, suggesting reason for caution for crowdsourced models that rely on a representative sample of the population. Overall, our analyses reveal that while crowd-based systems provide some information on news quality, they are nonetheless limited—and have significant variation—in their ability to identify false news.
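
To make the baseline concrete, here is a minimal sketch of a “simple aggregation rule” of the kind the study benchmarks against: labeling a story by majority vote over crowd ratings. The data layout, labels, and the majority_vote helper are illustrative assumptions, not the authors' actual pipeline.

```python
# Minimal sketch of a simple crowd aggregation rule (majority vote),
# the kind of baseline the study compares against ML-based models.
# The data layout and labels are illustrative assumptions.
from collections import Counter

def majority_vote(ratings: list[str]) -> str:
    """Label a story with whichever verdict most crowd raters gave it."""
    return Counter(ratings).most_common(1)[0][0]

# Hypothetical evaluations: each story gets several ordinary-user ratings.
crowd_ratings = {
    "story_1": ["true", "true", "false", "true"],
    "story_2": ["false", "false", "misleading", "false"],
}

for story, ratings in crowd_ratings.items():
    print(story, "->", majority_vote(ratings))
```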


2021 ◽  
Vol 1 (1) ◽  
Author(s):  
Ronald Robertson

Call for Papers: Symposium and Special Issue - Uncommon yet Consequential Online Harms


2021 ◽  
Vol 1 (1) ◽  
Author(s):  
Karen Nershi

Call for Papers: Symposium and Special Issue - Cryptocurrency and Societal Harm


2021 ◽  
Vol 1 (1) ◽  
Author(s):  
Jae Yeon Kim ◽  
Aniket Kesari

Donald Trump linked COVID-19 to Chinese people on March 16, 2020, by calling it the “Chinese virus.” Using 59,337 US tweets related to COVID-19 and anti-Asian hate, we analyzed how Trump’s anti-Asian speech altered online hate speech content. Trump’s tweet increased the prevalence of both anti-Asian hate speech and counterhate speech. In addition, there is a linkage between hate speech and misinformation. Both before and after Trump’s tweet, hate speech speakers shared misinformation regarding the role of the Chinese government in the origin and spread of COVID-19. However, this tendency was amplified in the post-tweet period. The literature on misinformation and hate speech has developed in parallel, yet the two are often interwoven in practice. This association may exist because biased people justify and defend their hate speech using misinformation.
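
A minimal sketch of the pre/post comparison the abstract implies: computing the prevalence of hate-labeled tweets before and after the March 16, 2020 tweet. The records, labels, and cutoff handling below are hypothetical stand-ins for the authors' coded data, not their actual method.

```python
# Sketch of a pre/post prevalence comparison around the March 16, 2020 tweet.
# Records and labels are hypothetical stand-ins for the authors' coded tweets.
from datetime import date

# Hypothetical labeled tweets: (date, label).
tweets = [
    (date(2020, 3, 10), "hate"),
    (date(2020, 3, 12), "neutral"),
    (date(2020, 3, 17), "hate"),
    (date(2020, 3, 18), "hate"),
    (date(2020, 3, 19), "counterhate"),
]

cutoff = date(2020, 3, 16)  # date of the "Chinese virus" tweet

def prevalence(rows, label):
    """Share of tweets in rows carrying the given label."""
    return sum(1 for _, l in rows if l == label) / len(rows) if rows else 0.0

pre = [t for t in tweets if t[0] < cutoff]
post = [t for t in tweets if t[0] >= cutoff]
print("hate prevalence pre :", prevalence(pre, "hate"))
print("hate prevalence post:", prevalence(post, "hate"))
```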


2021 ◽  
Vol 1 (1) ◽  
Author(s):  
Olivia Borge ◽  
Victoria Cosgrove ◽  
Elena Cryst ◽  
Shelby Grossman ◽  
Shelby Perkins ◽  
...  

The suicide contagion effect posits that exposure to suicide-related content increases the likelihood of an individual engaging in suicidal behavior. Internet suicide-related queries correlate with suicide prevalence. However, suicide-related searches also lead people to access help resources. This article systematically evaluates the results returned from both general suicide terms and terms related to specific suicide means across three popular search engines—Google, Bing, and DuckDuckGo—in both English and Spanish. We find that Bing and DuckDuckGo surface harmful content more often than Google. We assess whether search engines show suicide prevention hotline information, and find that 53% of English queries have this information, compared to 13% of Spanish queries. Looking across platforms, 55% of Google queries include hotline information, compared to 35% for Bing and 10% for DuckDuckGo. Specific suicide means queries are 20% more likely to surface harmful results on Bing and DuckDuckGo compared to general suicide term queries, with no difference on Google.
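
A minimal sketch of the cross-engine tally behind percentages like these: counting, per engine and per language, the share of labeled queries whose result pages include hotline information. The labeled_queries records are hypothetical stand-ins for the authors' hand-coded result pages.

```python
# Sketch of a per-engine / per-language tally of hotline-information rates.
# The records are hypothetical stand-ins for hand-labeled search result pages.
from collections import defaultdict

# (engine, language, results_include_hotline_info)
labeled_queries = [
    ("google", "en", True),
    ("google", "es", False),
    ("bing", "en", True),
    ("bing", "es", False),
    ("duckduckgo", "en", False),
    ("duckduckgo", "es", False),
]

totals = defaultdict(lambda: [0, 0])  # group key -> [with_hotline, total]
for engine, lang, has_hotline in labeled_queries:
    for key in (engine, lang):  # tally by engine and by language separately
        totals[key][0] += int(has_hotline)
        totals[key][1] += 1

for key, (hits, total) in sorted(totals.items()):
    print(f"{key}: {hits / total:.0%} of queries show hotline info")
```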


2021 ◽  
Vol 1 (1) ◽  
Author(s):  
Hany Farid

It is said that what happens on the internet stays on the internet, forever. In some cases this may be considered a feature. Reports of human rights violations and corporate corruption, for example, should remain part of the public record. In other cases, however, digital immortality may be considered less desirable. Most would agree that terror-related content, child sexual abuse material, non-consensual intimate imagery, and dangerous disinformation, to name a few, should not be so easily found online. Neither human moderation nor artificial intelligence is currently able to contend with the spread of harmful content. Perceptual hashing has emerged as a powerful technology to limit the redistribution of multimedia content (including audio, images, and video). We review how this technology works, its advantages and disadvantages, and how it has been deployed on small- to large-scale platforms.
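
To make the technique concrete, here is a minimal sketch of one simple perceptual-hash variant, a difference hash (dHash), paired with a Hamming-distance comparison. It assumes the Pillow library, the file names are placeholders, and the 10-bit match threshold is an illustrative choice; deployed systems of the kind the article reviews, such as PhotoDNA, are considerably more robust to transformations.

```python
# Minimal difference-hash (dHash) sketch: one simple perceptual-hash variant.
# Illustrative only; production systems are far more robust to edits.
from PIL import Image

def dhash(image: Image.Image, hash_size: int = 8) -> int:
    """Compute a dHash by comparing adjacent pixel brightness in a downscaled image."""
    # Downscale and convert to grayscale so the hash ignores size and color.
    small = image.convert("L").resize((hash_size + 1, hash_size), Image.LANCZOS)
    pixels = list(small.getdata())
    bits = 0
    for row in range(hash_size):
        for col in range(hash_size):
            left = pixels[row * (hash_size + 1) + col]
            right = pixels[row * (hash_size + 1) + col + 1]
            bits = (bits << 1) | (1 if left > right else 0)
    return bits

def hamming(a: int, b: int) -> int:
    """Number of differing bits between two hashes."""
    return bin(a ^ b).count("1")

# Usage: two images are treated as near-duplicates if their hashes are close.
# The threshold (10 of 64 bits) and file names are assumed for illustration.
if __name__ == "__main__":
    h1 = dhash(Image.open("original.jpg"))
    h2 = dhash(Image.open("reupload.jpg"))
    print("match" if hamming(h1, h2) <= 10 else "no match")
```

Because the hash reflects the image's coarse brightness structure rather than its exact bytes, a re-encoded or slightly cropped copy still lands within a small Hamming distance of the original, which is what lets platforms match re-uploads of known harmful content.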


2021 ◽  
Vol 1 (1) ◽  
Author(s):  
Brittan Heller ◽  
Avi Bar-Zeev

The Problems with Immersive Advertising: In AR/VR, Nobody Knows You Are an Ad

