MUFFLE: Multi-Modal Fake News Influence Estimator on Twitter

2022 ◽  
Vol 12 (1) ◽  
pp. 453
Author(s):  
Cheng-Lin Wu ◽  
Hsun-Ping Hsieh ◽  
Jiawei Jiang ◽  
Yi-Chieh Yang ◽  
Chris Shei ◽  
...  

To alleviate the impact of fake news on our society, predicting the popularity of fake news posts on social media is a crucial problem worthy of study. However, most related studies on fake news emphasize detection only. In this paper, we focus on the issue of fake news influence prediction, i.e., inferring how popular a fake news post might become on social platforms. To achieve our goal, we propose a comprehensive framework, MUFFLE, which captures multi-modal dynamics by encoding the representation of news-related social networks, user characteristics, and content in text. The attention mechanism developed in the model can provide explainability for social or psychological analysis. To examine the effectiveness of MUFFLE, we conducted extensive experiments on real-world datasets. The experimental results show that our proposed method outperforms both state-of-the-art methods of popularity prediction and machine-based baselines in top-k NDCG and hit rate. Through the experiments, we also analyze the feature importance for predicting fake news influence via the explainability provided by MUFFLE.
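The top-k NDCG and hit-rate metrics used to evaluate MUFFLE follow standard ranking-evaluation definitions. A minimal sketch (function names and inputs are illustrative, not taken from the paper):

```python
import math

def ndcg_at_k(relevances, k):
    """NDCG@k for a ranked list of relevance scores (higher = more relevant):
    DCG of the predicted ranking divided by the DCG of the ideal ranking."""
    dcg = sum(rel / math.log2(i + 2) for i, rel in enumerate(relevances[:k]))
    ideal = sorted(relevances, reverse=True)
    idcg = sum(rel / math.log2(i + 2) for i, rel in enumerate(ideal[:k]))
    return dcg / idcg if idcg > 0 else 0.0

def hit_rate_at_k(ranked_items, relevant_items, k):
    """Fraction of the truly relevant items that appear in the top-k ranking."""
    hits = len(set(ranked_items[:k]) & set(relevant_items))
    return hits / len(relevant_items) if relevant_items else 0.0
```

A perfectly ordered list yields NDCG@k of 1.0, so both metrics reward models that place the most influential fake news posts near the top of the predicted ranking.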

2018 ◽  
Author(s):  
Andrea Pereira ◽  
Jay Joseph Van Bavel ◽  
Elizabeth Ann Harris

Political misinformation, often called “fake news”, represents a threat to our democracies because it impedes citizens from being appropriately informed. Evidence suggests that fake news spreads more rapidly than real news—especially when it contains political content. The present article tests three competing theoretical accounts that have been proposed to explain the rise and spread of political (fake) news: (1) the ideology hypothesis—people prefer news that bolsters their values and worldviews; (2) the confirmation bias hypothesis—people prefer news that fits their pre-existing stereotypical knowledge; and (3) the political identity hypothesis—people prefer news that allows their political in-group to fulfill certain social goals. We conducted three experiments in which American participants read news that concerned behaviors perpetrated by their political in-group or out-group and measured the extent to which they believed the news (Exp. 1, Exp. 2, Exp. 3) and were willing to share the news on social media (Exp. 2 and 3). Results revealed that Democrats and Republicans were both more likely to believe news about the value-upholding behavior of their in-group or the value-undermining behavior of their out-group, supporting the political identity hypothesis. However, although belief was positively correlated with willingness to share on social media in all conditions, we also found that Republicans were more likely to believe and want to share apolitical fake news. We discuss the implications for theoretical explanations of political beliefs and the application of these concepts in a polarized political system.


Metals ◽  
2021 ◽  
Vol 11 (2) ◽  
pp. 250
Author(s):  
Jiří Hájek ◽  
Zaneta Dlouha ◽  
Vojtěch Průcha

This article is a response to the state of the art in monitoring the cooling capacity of quenching oils in industrial practice. Very often, a hardening shop requires a report with data on the cooling process for a particular quenching oil. However, the interpretation of the data can be rather difficult. The main goal of our work was to compare various criteria used for evaluating quenching oils. Those which proved essential for operation in tempering plants would then be introduced into practice. Furthermore, the article describes monitoring the changes in the properties of a quenching oil used in a hardening shop, the effects of quenching oil temperature on its cooling capacity, and the impact of the water content on certain cooling parameters of selected oils. Cooling curves were measured (including cooling rates and the times to reach relevant temperatures) according to ISO 9950. The hardening power of the oil and the area below the cooling rate curve as a function of temperature (the amount of heat removed in the nose region of the continuous cooling transformation (CCT) curve) were calculated. V-values based on the work of Tamura, reflecting the steel type and its CCT curve, were calculated as well. All the data were compared against the hardness and microstructure on a section through a cylinder made of EN C35 steel cooled in the particular oil. Based on the results, criteria are recommended for assessing the suitability of a quenching oil for a specific steel grade and product size. The quenching oils used in the experiment were Houghto Quench C120, Paramo TK 22, Paramo TK 46, CS Noro MO 46 and Durixol W72.
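The "area below the cooling rate curve" criterion can be approximated numerically from a sampled cooling curve. A rough sketch, assuming finite-difference rates and trapezoidal integration; the 600–400 °C window stands in for the CCT nose region and is an illustrative assumption, not a value taken from ISO 9950:

```python
def cooling_rate_area(times, temps, t_high=600.0, t_low=400.0):
    """Estimate the heat-extraction criterion from a measured cooling curve:
    the area under the cooling-rate-vs-temperature curve between t_high and
    t_low (deg C).  times in seconds, temps sampled as the probe cools."""
    # finite-difference cooling rate at segment midpoints (positive while cooling)
    rates, mid_temps = [], []
    for i in range(len(times) - 1):
        dt = times[i + 1] - times[i]
        rates.append(-(temps[i + 1] - temps[i]) / dt)
        mid_temps.append(0.5 * (temps[i] + temps[i + 1]))
    # trapezoidal area over the selected temperature window
    area = 0.0
    for i in range(len(mid_temps) - 1):
        tm = 0.5 * (mid_temps[i] + mid_temps[i + 1])
        if t_low <= tm <= t_high:
            area += 0.5 * (rates[i] + rates[i + 1]) * abs(mid_temps[i] - mid_temps[i + 1])
    return area
```

For a constant cooling rate of 10 °C/s the area over a 200 °C window comes out near 2000 °C²/s, so oils that cool faster through the nose region score a larger area.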


2021 ◽  
Vol 22 (1) ◽  
Author(s):  
João Lobo ◽  
Rui Henriques ◽  
Sara C. Madeira

Abstract
Background: Three-way data started to gain popularity due to their increasing capacity to describe inherently multivariate and temporal events, such as biological responses, social interactions along time, urban dynamics, or complex geophysical phenomena. Triclustering, the subspace clustering of three-way data, enables the discovery of patterns corresponding to data subspaces (triclusters) with values correlated across the three dimensions (observations × features × contexts). With an increasing number of algorithms being proposed, effectively comparing them with state-of-the-art algorithms is paramount. These comparisons are usually performed using real data, without a known ground truth, thus limiting the assessments. In this context, we propose a synthetic data generator, G-Tric, allowing the creation of synthetic datasets with configurable properties and the possibility to plant triclusters. The generator is prepared to create datasets resembling real three-way data from biomedical and social data domains, with the additional advantage of also providing the ground truth (the triclustering solution) as output.
Results: G-Tric can replicate real-world datasets and create new ones that match researchers' needs across several properties, including data type (numeric or symbolic), dimensions, and background distribution. Users can tune the patterns and structure that characterize the planted triclusters (subspaces) and how they interact (overlapping). Data quality can also be controlled by defining the amount of missing values, noise, or errors. Furthermore, a benchmark of datasets resembling real data is made available, together with the corresponding triclustering solutions (planted triclusters) and generating parameters.
Conclusions: Triclustering evaluation using G-Tric makes it possible to combine both intrinsic and extrinsic metrics when comparing solutions, producing more reliable analyses. A set of predefined datasets, mimicking widely used three-way data and exploring crucial properties, was generated and made available, highlighting G-Tric's potential to advance the triclustering state of the art by easing the process of evaluating the quality of new triclustering approaches.
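The core idea of planting a tricluster, a coherent pattern over chosen index subsets embedded in background noise, can be sketched as follows. This is a toy illustration only; G-Tric itself supports many pattern types, background distributions, and quality controls:

```python
import random

def generate_with_tricluster(n_obs, n_feat, n_ctx, tri_obs, tri_feat, tri_ctx,
                             tri_value=5.0, noise=1.0, seed=0):
    """Build a 3-way dataset (observations x features x contexts) of Gaussian
    background noise, then overwrite the chosen index subsets with a constant
    pattern plus small jitter, returning the ground truth alongside the data."""
    rng = random.Random(seed)
    data = [[[rng.gauss(0.0, noise) for _ in range(n_ctx)]
             for _ in range(n_feat)] for _ in range(n_obs)]
    for i in tri_obs:
        for j in tri_feat:
            for k in tri_ctx:
                data[i][j][k] = tri_value + rng.gauss(0.0, 0.1)
    # the planted tricluster doubles as the triclustering solution
    return data, (tri_obs, tri_feat, tri_ctx)
```

Because the generator returns the planted subspaces, an extrinsic metric (e.g., recovery of the true tricluster indices) can be computed for any candidate triclustering algorithm.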


Author(s):  
Florian Kuisat ◽  
Fernando Lasagni ◽  
Andrés Fabián Lasagni

Abstract
It is well known that the surface topography of a part can affect its mechanical performance, which is typical in additive manufacturing. In this context, we report on the surface modification of additively manufactured components made of Titanium 64 (Ti64) and Scalmalloy®, using a pulsed laser, with the aim of reducing their surface roughness. In our experiments, a nanosecond-pulsed infrared laser source with variable pulse durations between 8 and 200 ns was applied. The impact of varying a large number of parameters on the surface quality of the smoothed areas was investigated. The results demonstrated a reduction of surface roughness Sa by more than 80% for Titanium 64 and by 65% for Scalmalloy® samples. This makes it possible to extend the applicability of additively manufactured components beyond the current state of the art and break new ground for use in various industries, such as aerospace.


Electronics ◽  
2021 ◽  
Vol 10 (12) ◽  
pp. 1407
Author(s):  
Peng Wang ◽  
Jing Zhou ◽  
Yuzhang Liu ◽  
Xingchen Zhou

Knowledge graph embedding aims to embed entities and relations into low-dimensional vector spaces. Most existing methods focus only on triple facts in knowledge graphs. In addition, models based on translation or distance measurement cannot fully represent complex relations. As well-constructed prior knowledge, entity types can be employed to learn the representations of entities and relations. In this paper, we propose a novel knowledge graph embedding model named TransET, which takes advantage of entity types to learn more semantic features. More specifically, circular convolution over the embeddings of an entity and its entity types is used to map the head entity and tail entity to type-specific representations, and a translation-based score function is then used to learn the representations of triples. We evaluated our model on real-world datasets with two benchmark tasks, link prediction and triple classification. Experimental results demonstrate that it outperforms state-of-the-art models in most cases.
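The type-specific mapping described above can be sketched with plain circular convolution followed by a TransE-style distance. This is an illustrative reconstruction from the abstract, not the authors' implementation; the function names and the L2 distance choice are assumptions:

```python
import math

def circular_convolution(a, b):
    """(a * b)[k] = sum_i a[i] * b[(k - i) mod n] for vectors of length n."""
    n = len(a)
    return [sum(a[i] * b[(k - i) % n] for i in range(n)) for k in range(n)]

def transet_score(head, head_type, rel, tail, tail_type):
    """Map head and tail entities to type-specific representations via
    circular convolution with their type embeddings, then apply a
    translation-based score ||h' + r - t'||; lower = more plausible triple."""
    h = circular_convolution(head, head_type)
    t = circular_convolution(tail, tail_type)
    return math.sqrt(sum((hv + rv - tv) ** 2 for hv, rv, tv in zip(h, rel, t)))
```

Convolving with a unit impulse leaves an embedding unchanged, which makes the degenerate case easy to check: identical type-mapped head and tail with a zero relation vector score exactly zero.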


BMJ Open ◽  
2021 ◽  
Vol 11 (2) ◽  
pp. e040749
Author(s):  
Shanthi Ann Ramanathan ◽  
Sarah Larkins ◽  
Karen Carlisle ◽  
Nalita Turner ◽  
Ross Stewart Bailie ◽  
...  

Objectives: To (1) apply the Framework to Assess the Impact from Translational health research (FAIT) to Lessons from the Best to Better the Rest (LFTB), (2) report on impacts from LFTB and (3) assess the feasibility and outcomes from a retrospective application of FAIT.
Setting: Three Indigenous primary healthcare (PHC) centres in the Northern Territory, Australia; a project coordinating centre distributed between Townsville, Darwin and Cairns; and the broader LFTB learning community across Australia.
Participants: The LFTB research team and one representative from each PHC centre.
Primary and secondary outcome measures: Impact reported as (1) quantitative metrics within domains of benefit using a modified Payback Framework, (2) a cost-consequence analysis, given that a return on investment was not appropriate, and (3) a narrative incorporating qualitative evidence of impact. Data were gathered through in-depth stakeholder interviews and a review of project documentation, outputs and relevant websites.
Results: LFTB contributed to knowledge advancement in Indigenous PHC service delivery; enhanced the existing capacity of health centre staff, researchers and health service users; enhanced supportive networks for quality improvement; and used a strengths-based approach highly valued by health centres. LFTB also leveraged between $A1.4 and $A1.6 million for the subsequent Leveraging Effective Ambulatory Practice (LEAP) Project to apply LFTB learnings to resource development and the creation of a learning community to empower striving PHC centres.
Conclusion: Retrospective application of FAIT to LFTB, although not ideal, was feasible. Prospective application would have allowed Indigenous community perspectives to be included. Greater appreciation of the full benefit of LFTB, including a measure of return on investment, will be possible when LEAP is complete. Future assessments of impact need to account for the limitations of fully capturing impact when intermediate/final impacts have not yet been realised and captured.


2021 ◽  
Vol 17 (1) ◽  
pp. 107-113
Author(s):  
Chantal Mak

While private corporations have become increasingly influential in the global economy, a comprehensive legal framework for their activities is missing. Although international and regional legal instruments may govern some aspects of, for instance, international investments and the supply of goods and services, there is no overarching structure for assessing the impact of large-scale private projects. In the absence of such a comprehensive framework, specific rules of private law allow profit-seeking companies to expand their activities on an economic basis, mostly without having to heed social concerns (Pistor, 2019). This is particularly problematic insofar as multinational companies have obtained power to set the rules for their engagement with states, organisations and individuals, for instance in the form of transnational investment contracts. Given the fragmented nature of the legal sphere in which such contracts are elaborated and performed, those who face the harmful consequences of such investments may not be able to participate in decision-making processes. The contracts remain in ‘wild zones’ of globalisation (Fraser, 2014, p. 150), where powerful private companies rule.


2021 ◽  
Vol 15 (5) ◽  
pp. 1-32
Author(s):  
Quang-huy Duong ◽  
Heri Ramampiaro ◽  
Kjetil Nørvåg ◽  
Thu-lan Dam

Dense subregion (subgraph and subtensor) detection is a well-studied area with a wide range of applications, and numerous efficient approaches and algorithms have been proposed. Approximation approaches are commonly used for detecting dense subregions due to the complexity of the exact methods. Existing algorithms are generally efficient for dense subtensor and subgraph detection and can perform well in many applications. However, most of the existing works utilize the state-of-the-art greedy 2-approximation algorithm, which can only provide solutions with a loose theoretical density guarantee. The main drawback of most of these algorithms is that they can estimate only one subtensor, or subgraph, at a time, with a low guarantee on its density. While some methods can, on the other hand, estimate multiple subtensors, they can give a guarantee on the density with respect to the input tensor for the first estimated subtensor only. We address these drawbacks by providing both a theoretical and a practical solution for estimating multiple dense subtensors in tensor data with a higher lower bound on the density. In particular, we prove a tighter lower bound on the density of the estimated subgraphs and subtensors. We also propose a novel approach to show that there are multiple dense subtensors with a density guarantee greater than the lower bound used in the state-of-the-art algorithms. We evaluate our approach with extensive experiments on several real-world datasets, which demonstrate its efficiency and feasibility.
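For graphs, the greedy 2-approximation the abstract refers to is the classic peeling algorithm: repeatedly remove the minimum-degree vertex and keep the intermediate subgraph with the best average-degree density |E|/|V|. A minimal sketch of that baseline (not the authors' improved method):

```python
from collections import defaultdict

def densest_subgraph_peel(edges):
    """Greedy peeling for the densest-subgraph problem on a simple undirected
    graph.  Returns (vertex set, density); the density is guaranteed to be at
    least half the optimum, i.e., a 2-approximation."""
    adj = defaultdict(set)
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    nodes = set(adj)
    m = len(edges)
    best_density, best_nodes = 0.0, set(nodes)
    while nodes:
        density = m / len(nodes)
        if density > best_density:
            best_density, best_nodes = density, set(nodes)
        u = min(nodes, key=lambda x: len(adj[x]))  # peel min-degree vertex
        m -= len(adj[u])
        for v in adj[u]:
            adj[v].discard(u)
        nodes.discard(u)
        del adj[u]
    return best_nodes, best_density
```

On a K4 with a pendant vertex attached, peeling the pendant first exposes the K4 as the densest intermediate subgraph, illustrating both the procedure and the looseness the paper targets: the returned density is only guaranteed to be within a factor of two of optimal.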


Logistics ◽  
2021 ◽  
Vol 5 (1) ◽  
pp. 8
Author(s):  
Hicham Lamzaouek ◽  
Hicham Drissi ◽  
Naima El Haoud

The bullwhip effect is a pervasive phenomenon in all supply chains, causing excessive inventory, delivery delays, deterioration of customer service, and high costs. Some researchers have studied this phenomenon from a financial perspective by shedding light on the phenomenon of the cash flow bullwhip (CFB). The objective of this article is to present the state of the art of research work on CFB. Our ambition is not to make an exhaustive list, but to synthesize the main contributions so as to identify other interesting research perspectives. In this regard, certain lines of research remain insufficiently explored, such as the role that supply chain digitization could play in controlling CFB, the impact of CFB on the profitability of companies, or the impact of omnichannel commerce on CFB.
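The bullwhip effect is conventionally quantified as a variance amplification ratio between the orders a stage places upstream and the demand it observes downstream; the cash flow bullwhip applies the same idea to cash flow series. A minimal sketch of this standard measure (the function name is illustrative):

```python
from statistics import pvariance

def bullwhip_ratio(demand, orders):
    """Variance of upstream orders divided by variance of downstream demand.
    A ratio > 1 signals amplification along the supply chain; substituting a
    cash flow series for orders gives a cash-flow-bullwhip analogue."""
    return pvariance(orders) / pvariance(demand)
```

For example, a stage that turns a demand oscillating between 10 and 12 units into orders oscillating between 5 and 15 has a bullwhip ratio of 25, a strong amplification signal.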

