The processing and evaluation of news content on social media is influenced by peer-user commentary

Author(s):  
Arnout B. Boot ◽  
Katinka Dijkstra ◽  
Rolf A. Zwaan

Abstract Contemporary news often spreads via social media. This study investigated whether the processing and evaluation of online news content can be influenced by Likes and peer-user comments. An online experiment was designed, using a custom-built website that resembled Facebook, to explore how Likes, positive comments, negative comments, or a combination of positive and negative comments affect the reader’s processing of news content. The results showed that negative comments in particular affected readers’ personal opinions about the news content, even when combined with positive comments: they (1) induced more negative attitudes, (2) lowered intent to share the content, (3) reduced agreement with the ideas it conveyed, (4) lowered the perceived attitude of the general public toward it, and (5) decreased its perceived credibility. Against expectations, the presence of Likes did not affect readers, irrespective of the news content. An important consideration is that, while the negative comments were persuasive, they consisted of subjective, emotive, and fallacious rhetoric. Finally, negativity bias, the perception of expert authority, and cognitive heuristics are discussed as potential explanations for the persuasive effect of negative comments.

2019 ◽  
Vol 32 (3) ◽  
pp. 569-585
Author(s):  
Oren Soffer ◽  
Galit Gordoni

Abstract This article examines how user comments influence assessments of the public opinion climate and of perceived support for one’s own opinion. The effects of user-comment sentiment (positive vs. negative) and of user-comment content (with or without personal exemplification) were tested in an online experiment (n = 1,510). Results show that user-comment effects on estimates of public opinion depend mainly on the sentiment of the comments and not on their framing as opinions with or without personal exemplification. Negative comments significantly reduce readers’ estimates of public support for the issue addressed by the article and affect the perceived support for one’s own opinion. The results point to the possible dangers of deliberate manipulation of user comments in democratic public discussion.


2017 ◽  
Vol 47 (6) ◽  
pp. 815-837 ◽  
Author(s):  
Ashley Muddiman ◽  
Jamie Pond-Cobb ◽  
Jamie E. Matson

Researchers have condemned the effects of news but only recently turned their attention to the extent to which individuals engage with it. Within the context of uncivil online news, the current project investigates whether negativity always increases engagement with news. The results of two experiments demonstrate that civility in the news increased news engagement, especially compared to news with the most incivility. News articles that included multiple types of incivility, and news articles that prompted individuals to perceive that an out-group political party was behaving uncivilly, discouraged people from engaging with online news. The studies contribute theoretically to negativity bias and incivility research and signal that negativity does not always attract clicks.


2020 ◽  
Vol 8 (4) ◽  
pp. 53-62 ◽  
Author(s):  
Pere Masip ◽  
Jaume Suau ◽  
Carlos Ruiz-Caballero

Debates about post-truth need to take into account how news is re-disseminated in a hybrid media system in which social networks and audience participation play a central role. In such a system there is a certain risk of reducing citizens’ exposure to politically adverse news content, creating ‘echo chambers’ of political affinity. This article presents the results of research conducted in agreement with 18 leading Spanish online news media, based on a survey (N = 6,625) of their registered users. The results highlight that the high levels of selective exposure characteristic of offline media consumption are moderated in the online realm. Although most of the respondents get news online from like-minded media, the share who also get news from media with a different ideology should not be underestimated. As news consumption becomes more ‘social,’ our research shows that Spanish citizens who are more active on social media sites are more likely to be exposed to news content from different ideological positions than less active users. There is only a weak association between the use of a particular social network site and access to like- and non-like-minded news.


2021 ◽  
pp. 073953292110470
Author(s):  
Sherice Gearhart ◽  
Alexander Moe ◽  
Derrick Holland

News outlets rely on social media to distribute content freely, offering a venue for users to comment on news. This exposes individuals to user comments before they read news articles, which can influence perceptions of news content. A 2 × 2 between-subjects experiment (N = 690) tested hostile media bias theory by examining how comments seen before viewing a news story influence perceptions of bias and credibility. Results show that user comments induce hostile media perceptions.


Author(s):  
Reinert Yosua Rumagit

With the rapid development of the internet, users can comment on a wide variety of content across social networks such as social media platforms and blogs. This freedom to comment also gives rise to negative comments, including insults and incitement. By classifying user comments, a system can become better at distinguishing threatening, insulting, and inciting comments. The classification technique uses deep learning and distinguishes six classes. Experiments show that the deep learning models achieve an accuracy above 98%.
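
The abstract reports only that a six-class deep learning classifier was trained on user comments; the architecture, class labels, and training data in the sketch below are illustrative assumptions rather than the authors’ actual setup. A minimal Keras version of such a classifier might look like this:

```python
# Minimal sketch of a six-class comment classifier using Keras.
# The class names, layer sizes, and toy training data are assumptions for
# illustration; the paper does not report its exact architecture or labels.
import tensorflow as tf
from tensorflow.keras import layers

CLASSES = ["neutral", "threat", "insult", "incitement", "harassment", "spam"]

# Toy labelled comments; a real run would use the study's annotated data set.
comments = tf.constant([
    "nice post, thanks for sharing",
    "i will find you and hurt you",
    "you are a complete idiot",
    "everyone should go attack that group",
    "stop messaging me, leave me alone",
    "click here to win a free prize",
])
labels = tf.constant([0, 1, 2, 3, 4, 5])

# Turn raw strings into integer token sequences inside the model.
vectorizer = layers.TextVectorization(max_tokens=10_000, output_sequence_length=32)
vectorizer.adapt(comments)

model = tf.keras.Sequential([
    vectorizer,
    layers.Embedding(10_000, 64),
    layers.GlobalAveragePooling1D(),
    layers.Dense(64, activation="relu"),
    layers.Dense(len(CLASSES), activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(comments, labels, epochs=20, verbose=0)

# Classify a new comment.
pred = model.predict(tf.constant(["you are such a fool"]), verbose=0)
print(CLASSES[int(pred.argmax(axis=-1)[0])])
```

With a realistically sized labelled corpus, the same pipeline scales by swapping the toy tensors for the full data set and holding out a test split to estimate accuracy.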


2019 ◽  
Vol 44 (4) ◽  
pp. 427-446 ◽  
Author(s):  
A. Marthe Möller ◽  
Rinaldo Kühne

Abstract Videos presented on social media platforms are frequently watched because people find them entertaining. However, videos on social media platforms are often presented together with user comments containing information about how entertaining previous viewers found them to be. This social information may affect people’s entertainment experiences. The goal of the present study was to explore how user comments affect viewers’ hedonic and eudaimonic entertainment experiences in response to online videos. The results of an online experiment (N = 203) showed that user comments in which previous viewers of a video indicate that they enjoyed or appreciated the video increase the hedonic entertainment experiences of new viewers. Viewers’ eudaimonic entertainment experiences were unaffected by user comments. These findings show that entertainment experiences do not emerge in response to online videos alone. Instead, they also depend on information about the entertainment experiences of previous viewers.


2021 ◽  
Vol 111 (3) ◽  
pp. 831-870 ◽  
Author(s):  
Ro’ee Levy

Does the consumption of ideologically congruent news on social media exacerbate polarization? I estimate the effects of social media news exposure by conducting a large field experiment randomly offering participants subscriptions to conservative or liberal news outlets on Facebook. I collect data on the causal chain of media effects: subscriptions to outlets, exposure to news on Facebook, visits to online news sites, and sharing of posts, as well as changes in political opinions and attitudes. Four main findings emerge. First, random variation in exposure to news on social media substantially affects the slant of news sites that individuals visit. Second, exposure to counter-attitudinal news decreases negative attitudes toward the opposing political party. Third, in contrast to the effect on attitudes, I find no evidence that the political leanings of news outlets affect political opinions. Fourth, Facebook’s algorithm is less likely to supply individuals with posts from counter-attitudinal outlets, conditional on individuals subscribing to them. Together, the results suggest that social media algorithms may limit exposure to counter-attitudinal news and thus increase polarization. (JEL C93, D72, L82)


2020 ◽  
Vol 12 (2) ◽  
pp. 56-77
Author(s):  
Antonio Rino

A negative comment on a corporate social media post can pierce like an arrow to the chest and puncture holes in an organization’s walls. A single negative voice in a sea of positive feedback can feel as though it is blaring from a giant bullhorn, striking fear into corporate community managers that an avalanche of negativity will overtake positivity like a contagious bandwagon. Why would a corporation consider telling its story in the online battlefield of social media and risk exposing its reputation to a cesspool of negativity? This paper will explore why negativity is an online barrier through research, industry advice and best practices – including from the researchers and experts who use the foregoing colourful idioms and metaphors to describe negative online comments. To answer the main question of why an organization would consider engaging on social media in the face of prolific negativity and hate speech, this paper will review the evolution of online emotions and the rise of negativity on social media. The paper will define negative online comments in the corporate context using research on trolls, cyberbullying and online personal attacks. Using the psychology of Pareto’s 80/20 rule and negativity bias, this paper will provide quantitative and qualitative perspectives on negativity to show why companies pay much more attention to negative comments than positive ones, and how analysis of negativity can help a company develop emotional intelligence. Examples will be presented from research and industry to understand and combat negativity, and research that classifies commenting users will be reviewed to better understand their motivations. Using research on tone and voice in online conversation, this paper will share cautionary case studies that demonstrate how companies that are not self-aware can incite negative comments. Finally, this paper will review research on platform content moderation techniques to understand how social media platforms like Facebook manage negativity and will suggest similar solutions for corporations, including not only the online community’s ability but also our collective responsibility to moderate and overcome the online positivity deficit.

Keywords: social media, community manager, online negativity, negativity bias, negative comments, online emotions, user categorization


10.2196/16649 ◽  
2020 ◽  
Vol 22 (7) ◽  
pp. e16649 ◽  
Author(s):  
Shuqing Gao ◽  
Lingnan He ◽  
Yue Chen ◽  
Dan Li ◽  
Kaisheng Lai

Background High-quality medical resources are in high demand worldwide, and the application of artificial intelligence (AI) in medical care may help alleviate the crisis related to this shortage. The development of the medical AI industry depends to a certain extent on whether industry experts have a comprehensive understanding of the public’s views on medical AI. Currently, the opinions of the general public on this matter remain unclear. Objective The purpose of this study is to explore the public perception of AI in medical care through a content analysis of social media data, including specific topics that the public is concerned about; public attitudes toward AI in medical care and the reasons for them; and public opinion on whether AI can replace human doctors. Methods Through an application programming interface, we collected a data set from the Sina Weibo platform comprising more than 16 million users throughout China by crawling all public posts from January to December 2017. Based on this data set, we identified 2315 posts related to AI in medical care and classified them through content analysis. Results Among the 2315 identified posts, we found three types of AI topics discussed on the platform: (1) technology and application (n=987, 42.63%), (2) industry development (n=706, 30.50%), and (3) impact on society (n=622, 26.87%). Out of 956 posts where public attitudes were expressed, 59.4% (n=568), 34.4% (n=329), and 6.2% (n=59) of the posts expressed positive, neutral, and negative attitudes, respectively. The immaturity of AI technology (27/59, 46%) and a distrust of related companies (n=15, 25%) were the two main reasons for the negative attitudes. Across 200 posts that mentioned public attitudes toward replacing human doctors with AI, 47.5% (n=95) and 32.5% (n=65) of the posts expressed that AI would completely or partially replace human doctors, respectively. In comparison, 20.0% (n=40) of the posts expressed that AI would not replace human doctors. Conclusions Our findings indicate that people are most concerned about AI technology and applications. Generally, the majority of people held positive attitudes and believed that AI doctors would completely or partially replace human ones. Compared with previous studies on medical doctors, the general public has a more positive attitude toward medical AI. Lack of trust in AI and the absence of the humanistic care factor are essential reasons why some people still have a negative attitude toward medical AI. We suggest that practitioners may need to pay more attention to promoting the credibility of technology companies and meeting patients’ emotional needs instead of focusing merely on technical issues.
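
As a rough illustration of the quantitative side of the Methods and Results, the sketch below filters posts and tallies coded attitudes into the kind of percentages reported above. The keyword filter stands in for the study’s manual relevance and attitude coding; the keywords, example posts, and field names are assumptions, not the authors’ coding scheme.

```python
# Illustrative sketch of filtering posts about medical AI and tallying coded
# attitudes into percentages. The keyword list, example posts, and field
# names are hypothetical; the study coded topics and attitudes manually.
from collections import Counter

AI_MEDICAL_KEYWORDS = ["medical ai", "ai doctor", "ai diagnosis"]  # assumed filter terms

posts = [
    {"text": "An AI doctor read my scan in seconds, amazing", "attitude": "positive"},
    {"text": "I do not trust the companies selling AI diagnosis", "attitude": "negative"},
    {"text": "Medical AI was a big topic at the conference", "attitude": "neutral"},
    {"text": "Lovely weather in Guangzhou today", "attitude": "neutral"},
]

# Keep only posts that mention medical AI (a stand-in for manual relevance coding).
relevant = [
    p for p in posts
    if any(k in p["text"].lower() for k in AI_MEDICAL_KEYWORDS)
]

# Tally attitudes and report counts with percentages, as in the Results section.
counts = Counter(p["attitude"] for p in relevant)
total = sum(counts.values())
for attitude, n in counts.most_common():
    print(f"{attitude}: n={n} ({100 * n / total:.1f}%)")
```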

