Fake News and Rumour Detection on Social Media

Computer-Mediated Communication (CMC) platforms such as blogs, Twitter, Reddit, Facebook and other social media now have so many active users that they have become an ideal stage for news delivery on a mass scale. Such a mass-scale news delivery system comes with the caveat of questionable veracity. Establishing the reliability of online information is a strenuous and daunting task, yet it is critically important, especially during time-sensitive situations such as real emergencies, which can have destructive effects on individuals and society. The 2016 US Presidential election is an example of such a crisis: one study concluded that the public's engagement with fake news through Facebook was higher than through mainstream sources. To combat the spread of malicious and unintentional misinformation on social media, we developed a model to detect fake news. Fake news detection is the process of classifying news and placing it on a continuum of veracity. Detection is performed by classifying and clustering assertions made about an event, followed by veracity assessment methods drawing on linguistic cues, characteristics of the people involved, and network propagation dynamics.
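To make the three cue families concrete, the following is a minimal sketch (not the paper's implementation) of how linguistic cues, characteristics of the people involved, and propagation dynamics can feed a single veracity classifier; the feature names and toy values are illustrative assumptions.

```python
# Minimal sketch: combining linguistic, user, and propagation cues into one
# veracity classifier. Feature names below are illustrative assumptions.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

# Each row is one assertion about an event; columns are hand-crafted cues:
# [exclamation_ratio (linguistic), avg_account_age_days (user), retweet_depth (propagation)]
X = np.array([
    [0.30,   40, 6],   # sensational wording, young accounts, deep cascade
    [0.02,  900, 2],   # neutral wording, established accounts, shallow cascade
    [0.25,   15, 7],
    [0.01, 1200, 1],
])
y = np.array([0, 1, 0, 1])  # 0 = likely false, 1 = likely true

clf = RandomForestClassifier(n_estimators=100, random_state=0)
print(cross_val_score(clf, X, y, cv=2))  # toy data, scores are illustrative only
```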

2020 ◽  
Vol 6 (1) ◽  
pp. 71-82
Author(s):  
Ahmad Fauzi ◽  
Dewi Wulandari

Abstract: In this era of globalization, information technology is advancing rapidly. Managing information requires good technology, because information has great value for a company, and the processing speed of modern computers has enabled the development of computer-based information systems. The problem at the Kauman Pharmacy is that data processing is still done manually, from recording incoming and outgoing drugs, through frequent mismatches between recorded stock and the actual drugs on hand, to reports that are still produced in Microsoft Excel. The system design is described with UML modelling. This web-based (intranet) drug sales information system for the Kauman Pharmacy is the best solution: it can improve the quality of drug data processing at the pharmacy, make data processing easier and more effective, keep data safe, and minimize data redundancy. The web-based drug sales information system is built using PHP and MySQL.
Keywords: Information System, Sales, Kauman Pharmacy


Author(s):  
Varalakshmi Konagala ◽  
Shahana Bano

The spread of unreliable information in everyday news sources, for example news sites, social media channels, and online newspapers, has made it challenging to identify trustworthy news sources, thus increasing the need for computational tools able to provide insight into the reliability of online content. For instance, fake news outlets were found to be more likely to use language that is subjective and emotional. When researchers set out to build an AI-based tool for detecting fake news, there was not enough data to train their algorithms, so they did the only rational thing and created new datasets. In this chapter, two novel datasets for the task of fake news detection, covering distinct news domains, are used for identifying fake content in online news. An n-gram model detects fake content automatically, with an emphasis on fake reviews and fake news. This is followed by a set of learning experiments to build accurate fake news detectors, which achieve accuracies of up to 80%.
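As an illustration of the n-gram approach described above, here is a hedged sketch: word n-gram TF-IDF features feeding a linear classifier. The toy headlines and the choice of LinearSVC are assumptions for illustration, not the chapter's exact setup or datasets.

```python
# Hedged sketch of n-gram based fake content detection (not the chapter's exact setup).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.svm import LinearSVC
from sklearn.pipeline import make_pipeline

train_texts = [
    "Shocking! Celebrity cures disease with one weird trick",
    "Government report confirms quarterly inflation figures",
    "You won't believe what this miracle pill does",
    "Central bank announces interest rate decision",
]
train_labels = ["fake", "real", "fake", "real"]

model = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2), lowercase=True),  # unigrams + bigrams
    LinearSVC(),
)
model.fit(train_texts, train_labels)
print(model.predict(["Miracle trick confirms shocking report"]))  # toy prediction
```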


Author(s):  
Anja S. Göritz

Online panels (OPs) are an important form of web-based data collection, as illustrated by their widespread use. In the classical sense, a panel is a longitudinal study in which the same information is collected from the same individuals at different points in time. In contrast to that, an OP has come to denote a pool of registered people who have agreed to occasionally take part in web-based studies. Thus with OPs, the traditional understanding of a panel as a longitudinal study is broadened, because an OP can be employed as a sampling source for both longitudinal and cross-sectional studies. This article gives an overview of the current state of use of OPs. It discusses what OPs are, what types of OPs there are, how OPs work from a technological point of view, and what their advantages and disadvantages are. The article reviews the current body of methodological findings on doing research with OPs. Based on this evidence, recommendations are given as to how the quality of data collected in OPs can be augmented.


2021 ◽  
Vol 13 (17) ◽  
pp. 9925
Author(s):  
Maria Panitsa ◽  
Nikolia Iliopoulou ◽  
Emmanouil Petrakis

Citizen science can serve as a tool to address environmental and conservation issues. In the framework of the Erasmus+ project CS4ESD, this study focuses on promoting the importance of plants and of plant species and community diversity by using available web-based information, owing to Covid-19 limitations, in the case study of the Olympus mountain Biosphere Reserve (Greece). A questionnaire was designed to collect the necessary information, aiming to investigate pupils' and students' willingness to distinguish and learn more about plant species and communities and to evaluate information found on the web. Pupils, students, and experts participated in this study. The results are indicative of young citizens' ability to evaluate environmental issues. They often underestimate plant species richness, endemism, plant communities, the importance of plants, and ecosystem services. They also use environmental or plant-based websites and online available data in a significantly different way than experts do. The age of the young citizens is a factor that may affect the quality of data. The essential issue of recognizing the importance of plants and plant communities, and of assisting in their conservation, is highlighted. Education for sustainable development is one of the most important tools for facilitating environmental knowledge and enhancing awareness.


Author(s):  
Amber Chauncey Strain ◽  
Lucille M. Booker

One of the major challenges of ANLP research is the constant balancing act between the need for large samples, and the excessive time and monetary resources necessary for acquiring those samples. Amazon’s Mechanical Turk (MTurk) is a web-based data collection tool that has become a premier resource for researchers who are interested in optimizing their sample sizes and minimizing costs. Due to its supportive infrastructure, diverse participant pool, quality of data, and time and cost efficiency, MTurk seems particularly suitable for ANLP researchers who are interested in gathering large, high quality corpora in relatively short time frames. In this chapter, the authors first provide a broad description of the MTurk interface. Next, they describe the steps for acquiring IRB approval of MTurk experiments, designing experiments using the MTurk dashboard, and managing data. Finally, the chapter concludes by discussing the potential benefits and limitations of using MTurk for ANLP experimentation.
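The chapter walks through the MTurk web dashboard; as a complement, the sketch below shows how a HIT could be posted programmatically with the boto3 MTurk client. The sandbox endpoint is real, but the survey URL, reward, and other parameter values are placeholder assumptions, and valid AWS requester credentials are required.

```python
# Hypothetical sketch: posting a HIT to the MTurk sandbox via boto3.
# Requires configured AWS credentials for an MTurk requester account.
import boto3

mturk = boto3.client(
    "mturk",
    region_name="us-east-1",
    endpoint_url="https://mturk-requester-sandbox.us-east-1.amazonaws.com",
)

# ExternalQuestion points workers at a survey hosted elsewhere (URL is a placeholder).
question_xml = """
<ExternalQuestion xmlns="http://mechanicalturk.amazonaws.com/AWSMechanicalTurkDataSchemas/2006-07-14/ExternalQuestion.xsd">
  <ExternalURL>https://example.org/my-anlp-survey</ExternalURL>
  <FrameHeight>600</FrameHeight>
</ExternalQuestion>
"""

hit = mturk.create_hit(
    Title="Short text-annotation task",
    Description="Label 20 sentences (about 5 minutes).",
    Reward="0.50",
    MaxAssignments=50,
    LifetimeInSeconds=3 * 24 * 3600,
    AssignmentDurationInSeconds=15 * 60,
    Question=question_xml,
)
print(hit["HIT"]["HITId"])  # identifier used later to retrieve and manage results
```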


2019 ◽  
Vol 214 ◽  
pp. 01049
Author(s):  
Alexey Anisenkov ◽  
Daniil Zhadan ◽  
Ivan Logashenko

A comprehensive and efficient environment and data monitoring system is a vital part of any HEP experiment. In this paper we describe the web-based software framework that is currently used by the CMD-3 Collaboration at the VEPP-2000 Collider, and partially by the Muon g-2 experiment at Fermilab, to monitor the status of data acquisition and the quality of the data taken by the experiments. The system is designed to meet typical requirements and cover various use cases of DAQ applications, ranging from central configuration, slow-control data monitoring, and data quality monitoring to user-oriented visualization and control of the hardware and DAQ processes. As intermediate middleware between the front-end electronics and the DAQ applications, the system focuses on providing a coherent high-level view for shifters and experts to support robust operations. In particular, it is used to integrate various experiment-dependent monitoring modules and tools into a unified web-oriented portal with an appropriate access control policy. The paper describes the design and overall architecture of the system, recent developments, and the most important aspects of the framework implementation.
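As a hypothetical illustration of the kind of user-oriented view such a framework serves (not the CMD-3 implementation itself), the sketch below exposes the latest slow-control readings as a small JSON endpoint; the channel names and the in-memory store are invented stand-ins for the real slow-control database.

```python
# Hypothetical illustration: a thin web layer serving slow-control readings to shifters.
from datetime import datetime, timezone
from flask import Flask, jsonify

app = Flask(__name__)

# Stand-in for the real slow-control database / message bus.
latest_readings = {
    "hv_channel_01": {"value": 1502.3, "unit": "V"},
    "hall_temperature": {"value": 21.7, "unit": "degC"},
}

@app.route("/api/slow-control/<channel>")
def slow_control(channel):
    reading = latest_readings.get(channel)
    if reading is None:
        return jsonify({"error": "unknown channel"}), 404
    return jsonify({
        "channel": channel,
        "timestamp": datetime.now(timezone.utc).isoformat(),
        **reading,
    })

if __name__ == "__main__":
    app.run(port=8080)
```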


Author(s):  
H. Rathi ◽  
M. Biyani ◽  
M. Malik ◽  
P. Rathi

Background. On March 24, 2020, a nationwide lockdown of 21 days was ordered by the Government of India, and it was subsequently extended until May 31, 2020. Researchers have argued that a lockdown is a necessary step to prevent the spread of COVID-19. However, others have stated that it could cause serious damage to the economic, mental, social, and physical well-being of the people. Objective. The aim of the study is to evaluate the impact of the lockdown on the quality of life and well-being of Indians. Methods. This is a cross-sectional, prospective, web-based questionnaire study. A link (https://forms.gle/pX25VuahP5NxT88QA) was created; a total of 426 responses were received via that link and included in the statistical analysis. Results. Our study revealed that during the lockdown 61.5% of respondents were performing less physical activity than before, and more than half reported reduced financial satisfaction. Most answers on emotional and social-family well-being were positive, but some responses were concerning: for example, 22% felt anxious and nervous on more than half of the days. The study found that physical, financial, emotional, mental, social, and family well-being were disturbed during the lockdown and that quality of life was hampered. Conclusion. Although the nationwide lockdown may have been the most necessary action at that point in time to prevent the spread of the virus, our study revealed that uncertainty regarding a cure and management guidelines such as lockdown and social distancing badly affected the quality of life and well-being of the population.


The spread of misleading information in everyday news sources, for example social media channels, news blogs, and online newspapers, has made it challenging to identify trustworthy news sources, thus increasing the need for computational tools able to provide insight into the reliability of online content. In this paper, we focus on the automatic identification of fake content in news articles. First, we present a dataset for the task of fake news detection. We describe the pre-processing, feature extraction, classification, and prediction steps in detail. We use logistic regression and natural language processing techniques to classify fake news. The pre-processing functions perform operations such as tokenizing and stemming, along with exploratory data analysis such as inspecting the response-variable distribution and checking data quality (for example, null or missing values). Simple bag-of-words, n-grams, and TF-IDF are used as feature extraction techniques. A logistic regression model is used as the classifier for fake news detection, outputting a probability of truth.
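A minimal sketch of the pipeline described above, assuming an NLTK Porter stemmer for pre-processing and scikit-learn for TF-IDF n-grams and the logistic regression classifier; the toy headlines are invented, and predict_proba plays the role of the probability of truth.

```python
# Minimal sketch: tokenise/stem -> TF-IDF n-grams -> logistic regression with probability of truth.
import re
from nltk.stem import PorterStemmer
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

stemmer = PorterStemmer()

def stem_tokenizer(text):
    # Lowercase, keep alphabetic tokens, and stem each one.
    return [stemmer.stem(tok) for tok in re.findall(r"[a-z]+", text.lower())]

texts = [
    "Scientists shocked by miracle cure hidden from the public",
    "Ministry publishes annual budget figures for review",
    "Secret plot revealed in leaked viral video",
    "City council approves new bus timetable",
]
labels = [0, 1, 0, 1]  # 0 = fake, 1 = true

model = make_pipeline(
    TfidfVectorizer(tokenizer=stem_tokenizer, ngram_range=(1, 2)),  # unigrams + bigrams
    LogisticRegression(),
)
model.fit(texts, labels)
# Probability of truth for an unseen headline (toy output).
print(model.predict_proba(["Leaked video reveals secret budget plot"])[:, 1])
```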


Author(s):  
Shilpa Singhal

Abstract: Social media interaction, such as news spreading through a network, is a major source of information nowadays. Its low cost, easy access, and rapid dissemination of information lead people to seek out and consume news from social media. Twitter is among the most popular real-time news sources and has become one of the most dominant news spreading mediums, but it is also known to cause considerable harm by spreading fake news among people. Online users are typically vulnerable and rely on social media as their source of information without checking the veracity of what is being spread. This research develops a system for detecting rumours about real-world events that propagate on Twitter, together with a prediction algorithm that trains a machine to predict whether given data is information or a rumour. The work extracts the useful features of a tweet. The dataset used is the PHEME dataset of known rumours and non-rumours. We then compare various well-known machine learning algorithms such as decision trees, SVM, and Random Tree.
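A hedged sketch of the comparison step described above: the same tweet-level features evaluated with several scikit-learn classifiers (a random forest stands in for the Random Tree variant). The feature columns are invented stand-ins for PHEME-derived features, not the study's exact feature set.

```python
# Hedged sketch: comparing classifiers on toy tweet-level features.
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.svm import SVC
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

# [retweet_count, has_url, user_followers, question_marks]
X = np.array([
    [350, 0,   120, 2],
    [ 12, 1, 54000, 0],
    [510, 0,    80, 3],
    [  8, 1, 23000, 0],
    [420, 0,   200, 2],
    [ 15, 1, 76000, 0],
])
y = np.array([1, 0, 1, 0, 1, 0])  # 1 = rumour, 0 = non-rumour

for name, clf in [
    ("Decision tree", DecisionTreeClassifier(random_state=0)),
    ("SVM", SVC()),
    ("Random forest", RandomForestClassifier(n_estimators=50, random_state=0)),
]:
    scores = cross_val_score(clf, X, y, cv=3)
    print(name, scores.mean())  # toy data, numbers are illustrative only
```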


Author(s):  
Varalakshmi Konagala ◽  
Shahana Bano

The engendering of uncertain data in ordinary access news sources, for example, news sites, web-based life channels, and online papers, have made it trying to recognize capable news sources, along these lines expanding the requirement for computational instruments ready to give into the unwavering quality of online substance. For instance, counterfeit news outlets were observed to be bound to utilize language that is abstract and enthusiastic. At the point when specialists are chipping away at building up an AI-based apparatus for identifying counterfeit news, there wasn't sufficient information to prepare their calculations; they did the main balanced thing. In this chapter, two novel datasets for the undertaking of phony news locations, covering distinctive news areas, distinguishing proof of phony substance in online news has been considered. N-gram model will distinguish phony substance consequently with an emphasis on phony audits and phony news. This was pursued by a lot of learning analyses to fabricate precise phony news identifiers and showed correctness of up to 80%.

