DISINFORMATION DECONSTRUCTED: COGNITION SECURITY AND DIGITAL CONTROL

2021 ◽  
Vol 14(63) (1) ◽  
pp. 122-136
Author(s):  
Maria Magdalena POPESCU ◽  

Fake news and deepfakes have lately been highlighted in informative videos, research papers and literature reviews as tools for disinformation, along with the filter bubble, the echo chamber, polarization and mistrust. To counteract these unconventional weapons of word and imagery, a new research area, cognition security, has been defined: a transdisciplinary field for understanding the threats that hybrid wars currently exploit and for determining the proper measures against non-kinetic offensives. To this end, data mining and deep analysis are performed with digital instruments in a cognitive security system. Against this background, the present paper deconstructs these terms through an experimental monitoring of the media, in order to connect the realm of cognition security to its instruments in cognitive security. Keywords: fake news, deepfake, cognitive security, narrat

2014 ◽  
Vol 23 (05) ◽  
pp. 1450004 ◽  
Author(s):  
Ibrahim S. Alwatban ◽  
Ahmed Z. Emam

In recent years, a new research area known as privacy preserving data mining (PPDM) has emerged and captured the attention of many researchers interested in preventing the privacy violations that may occur during data mining. In this paper, we provide a review of studies on PPDM in the context of association rules (PPARM). This paper systematically defines the scope of this survey and determines the PPARM models. The problems of each model are formally described, and we discuss the relevant approaches, techniques and algorithms that have been proposed in the literature. A profile of each model and the accompanying algorithms are provided with a comparison of the PPARM models.
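The association rules at the heart of PPARM are characterized by their support and confidence, and a common sanitization goal is to push a sensitive rule's support below the mining threshold. A minimal sketch, using a hypothetical toy transaction database (none of the names below come from the survey):

```python
# Toy transaction database (hypothetical data for illustration).
transactions = [
    {"bread", "milk"},
    {"bread", "milk", "beer"},
    {"milk", "beer"},
    {"bread", "milk", "beer"},
]

def support(itemset, db):
    """Fraction of transactions containing every item in `itemset`."""
    return sum(itemset <= t for t in db) / len(db)

def confidence(antecedent, consequent, db):
    """Support of the full rule divided by support of the antecedent."""
    return support(antecedent | consequent, db) / support(antecedent, db)

print(support({"bread", "milk"}, transactions))       # 0.75
print(confidence({"bread"}, {"milk"}, transactions))  # 1.0

# One simple sanitization step in the spirit of PPARM: delete an item
# from transactions that support a sensitive itemset until its support
# drops below the mining threshold, so the rule is no longer discoverable.
sensitive = {"milk", "beer"}
threshold = 0.5
sanitized = [set(t) for t in transactions]
for t in sanitized:
    if support(sensitive, sanitized) < threshold:
        break
    if sensitive <= t:
        t.discard("beer")
print(support(sensitive, sanitized) < threshold)  # True
```

Real PPARM algorithms differ mainly in which transactions and items they choose to modify, trading off hiding the sensitive rules against damaging the non-sensitive ones.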


2021 ◽  
Vol 7 (2) ◽  
pp. 59-79
Author(s):  
Gabriela Almeida Marcon Nora ◽  
Leonardo Ensslin ◽  
Ademar Dutra ◽  
Vinícius Dezem

This paper aims to identify the approaches in the international literature to performance evaluation of the public sector. Within a qualitative approach, the research applies the ProKnow-C method to select a bibliographic portfolio (BP). A theoretical framework was uncovered that discloses the evolution of performance evaluation in the public sector; the paper then reports, specifically, the steps of the review, which also serves as a guide for improving scientific literature reviews in general. In this manuscript, 39 research papers were selected from a first search that returned 2228 papers. Basic and advanced bibliometric analyses were performed to identify particularities of the research area, such as authors, most cited papers and journals, as well as the specific concerns of performance evaluation in the public sector, such as the need for performance appraisal to foster organizational strategy.
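The ProKnow-C funnel described above (a large raw search narrowed to a small portfolio) can be sketched with a hypothetical toy dataset; the titles, citation counts and thresholds below are invented for illustration and are not the authors' data:

```python
# Hypothetical toy illustration of a ProKnow-C style funnel:
# deduplicate, keep title-aligned papers, then apply a citation
# threshold as a proxy for "scientific recognition".
raw = [
    {"title": "Performance evaluation in the public sector", "cites": 120},
    {"title": "Performance evaluation in the public sector", "cites": 120},  # duplicate
    {"title": "Marketing metrics in retail", "cites": 300},
    {"title": "Public sector performance appraisal and strategy", "cites": 8},
    {"title": "Evaluating public sector performance systems", "cites": 45},
]

# Step 1: remove exact duplicates returned by overlapping database queries.
seen, unique = set(), []
for p in raw:
    if p["title"] not in seen:
        seen.add(p["title"])
        unique.append(p)

# Step 2: title alignment with the research theme.
aligned = [p for p in unique if "public sector" in p["title"].lower()]

# Step 3: scientific recognition via a citation threshold.
portfolio = [p for p in aligned if p["cites"] >= 40]
print(len(portfolio))  # → 2
```

The actual method also includes abstract and full-text readings as later filters; the point here is only the successive-filter structure of the selection.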


2009 ◽  
pp. 2268-2274
Author(s):  
Vassilios S. Verykios

The enormous expansion of data collection and storage facilities has created an unprecedented increase in the need for data analysis and processing power. Data mining has long been the catalyst for automated and sophisticated data analysis and interrogation. Recent advances in data mining and knowledge discovery have had a controversial impact in both the scientific and technological arenas. On the one hand, data mining can analyze vast amounts of information within a minimal amount of time, an analysis that has exceeded the expectations of even the most imaginative scientists of the last decade. On the other hand, the sheer processing power of the intelligent algorithms this new research area brings puts at risk the sensitive and confidential information that resides in large and distributed data stores.


Author(s):  
Vassilios S. Verykios

The enormous expansion of data collection and storage facilities has created an unprecedented increase in the need for data analysis and processing power. Data mining has long been the catalyst for automated and sophisticated data analysis and interrogation. Recent advances in data mining and knowledge discovery have had a controversial impact in both the scientific and technological arenas. On the one hand, data mining can analyze vast amounts of information within a minimal amount of time, an analysis that has exceeded the expectations of even the most imaginative scientists of the last decade. On the other hand, the sheer processing power of the intelligent algorithms this new research area brings puts at risk the sensitive and confidential information that resides in large and distributed data stores. Privacy and security risks arising from the use of data mining techniques were first investigated in an early paper by O'Leary (1991). Clifton and Marks (1996) were the first to propose possible remedies for protecting sensitive data and sensitive knowledge from data mining. In particular, they suggested a variety of measures, such as controlled access to the data, fuzzification of the data, elimination of unnecessary groupings in the data, data augmentation, and data auditing. A subsequent paper by Clifton (2000) delivered concrete early results in the area by demonstrating an interesting approach to privacy protection that relies on sampling. A main result of Clifton's paper was showing how to determine the right sample size of the public data (the data to be disclosed to the public once sensitive information has been trimmed off), while at the same time estimating the error that sampling introduces into the significance of the rules.
Agrawal and Srikant (2000) were the first to establish a new research area, privacy preserving data mining, whose goal is to address the privacy and confidentiality issues that originate in the mining of data. The authors proposed an approach known as data perturbation, which relies on disclosing a modified database with noisy data instead of the original database. The modified database can still produce patterns very similar to those of the original database.
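The randomization idea behind data perturbation can be sketched in a few lines: publish each value with zero-mean noise added, so individual records are masked while aggregate patterns survive. The attribute, distribution parameters and noise level below are hypothetical, chosen only to illustrate the effect:

```python
import random

random.seed(7)

# Hypothetical sensitive numeric attribute (e.g., salaries); illustrative only.
original = [random.gauss(50_000, 8_000) for _ in range(10_000)]

# Data perturbation in the spirit of Agrawal & Srikant (2000): disclose
# x + r, where r is zero-mean random noise, instead of the raw value x.
noise_sd = 5_000
perturbed = [x + random.gauss(0, noise_sd) for x in original]

mean = lambda xs: sum(xs) / len(xs)

# Individual values are distorted by thousands, yet the perturbed mean
# stays close to the original because the added noise averages out.
print(abs(mean(original) - mean(perturbed)) < 500)  # aggregate preserved
```

Agrawal and Srikant's contribution went further, showing how to reconstruct the original *distribution* (not individual values) from the perturbed data well enough to build accurate decision trees; this sketch shows only the disclosure side of the scheme.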


2019 ◽  
Vol 10 (4) ◽  
pp. 18-37
Author(s):  
Farid Bourennani

Nowadays, we have access to unprecedented quantities of data composed of heterogeneous data types (HDT). Heterogeneous data mining (HDM) is a new research area that focuses on the processing of HDT. Usually, input data are transformed into an algebraic model before processing; however, how to combine the representations of HDT into a single model for unified processing of big data remains an open question. In this article, the authors attempt to answer this question by solving a data integration (DI) problem that involves the processing of seven HDT. They propose to solve the DI problem by combining multi-objective optimization and self-organizing maps to find optimal parameter settings for the most accurate HDM results. The preliminary results are promising, and a post-processing algorithm is proposed that makes the DI operations much simpler and more accurate.
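The prerequisite step the abstract mentions, mapping heterogeneous types into a single algebraic model, can be illustrated with a minimal sketch. This is not the authors' method (which couples multi-objective optimization with self-organizing maps); the records, fields and encodings below are hypothetical:

```python
# Minimal sketch (not the paper's method): map three heterogeneous data
# types -- numeric, categorical, free text -- into one flat feature
# vector, the "single algebraic model" that unified processing assumes.
records = [
    {"age": 34, "country": "CA", "bio": "data mining and maps"},
    {"age": 51, "country": "DE", "bio": "self organizing maps"},
]

ages = [r["age"] for r in records]
lo, hi = min(ages), max(ages)
countries = sorted({r["country"] for r in records})
vocab = sorted({w for r in records for w in r["bio"].split()})

def to_vector(r):
    numeric = [(r["age"] - lo) / (hi - lo)]                   # min-max scaled
    one_hot = [1.0 if r["country"] == c else 0.0 for c in countries]
    bow = [float(r["bio"].split().count(w)) for w in vocab]   # bag of words
    return numeric + one_hot + bow

vectors = [to_vector(r) for r in records]
print(len(vectors[0]))  # 1 numeric + 2 one-hot + |vocab| text features
```

The open question the article addresses is precisely that such ad hoc concatenations weight the types arbitrarily; its optimization step searches for parameter settings that balance them.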


2016 ◽  
Vol 16 (5) ◽  
pp. 69-77 ◽  
Author(s):  
Wenquan Yi ◽  
Fei Teng ◽  
Jianfeng Xu

Stream data mining has been a hot topic in data mining research in recent years, as it has extensive application prospects in the big data age. Research on stream data mining mainly focuses on frequent itemset mining, clustering and classification. However, traditional stream data mining methods are not effective enough for handling high-dimensional data sets, because they do not fit the characteristics of stream data, and so they need to be enhanced for big data applications. To resolve this issue, a hybrid framework is proposed for big stream data mining. In this framework, online and offline models are organized for different tasks, and the interior of each model is rationally organized according to the different mining tasks. The framework provides a new research idea and a macro perspective for stream data mining against the background of big data.
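The online/offline split the framework describes can be illustrated with a classic stream algorithm; the sketch below is a generic example, not the paper's framework. The online model here is the Misra-Gries frequent-items summary (one pass, bounded memory), while the offline model does an exact recount over stored data to confirm the candidates:

```python
from collections import Counter

def misra_gries(stream, k):
    """Online pass keeping at most k - 1 counters: any item occurring
    more than len(stream) / k times is guaranteed to survive as a
    candidate, whatever the order of the stream."""
    counters = {}
    for item in stream:
        if item in counters:
            counters[item] += 1
        elif len(counters) < k - 1:
            counters[item] = 1
        else:
            # Decrement every counter; drop those that reach zero.
            for key in list(counters):
                counters[key] -= 1
                if counters[key] == 0:
                    del counters[key]
    return set(counters)

stream = list("aabacabadabacaab")
candidates = misra_gries(stream, k=3)  # online model: one pass, O(k) memory
exact = Counter(stream)                # offline model: exact recount
frequent = {x for x in candidates if exact[x] > len(stream) / 3}
print(frequent)  # → {'a'}
```

The division of labor mirrors the framework's idea: the online model trades exactness for speed and memory on the live stream, and the offline model refines its output when time and storage permit.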


Author(s):  
Τιμολέων Νικόλαος Θεοφανέλλης ◽  
Μαρία Αντίκα

The arrival of the internet in the news media was expected to bring pluralism and objectivity. However, phenomena such as fake news, the echo chamber and the filter bubble, by exploiting human weaknesses, have merely reinforced the conditions that existed before the advent of the internet. These phenomena are amplified by the algorithms that websites use to provide free services to users, while profiting from attracting users and keeping them on those services. Nevertheless, pluralism does exist on the internet, but to reach it we need to recognize these phenomena, become sensitized to them, and be educated so that we can arrive at objective and pluralistic information. It is essential to be critical of what we hear and read, in order to be able to exercise greater control over our future than people who lack this knowledge.


2019 ◽  
Vol 8 (1) ◽  
pp. 114-133

Since the 2016 U.S. presidential election, attacks on the media have been relentless. “Fake news” has become a household term, and repeated attempts to break the trust between reporters and the American people have threatened the validity of the First Amendment to the U.S. Constitution. In this article, the authors trace the development of fake news and its impact on contemporary political discourse. They also outline cutting-edge pedagogies designed to assist students in critically evaluating the veracity of various news sources and social media sites.


2021 ◽  
Vol 7 ◽  
pp. 237802312110247
Author(s):  
Alexandrea J. Ravenelle ◽  
Abigail Newell ◽  
Ken Cai Kowalski

The authors explore media distrust among a sample of precarious and gig workers interviewed during the COVID-19 pandemic. Although these left-leaning respondents initially increased their media consumption at the outset of the pandemic, they soon complained of media sensationalism and repurposed a readily available cultural tool: claims of “fake news.” As a result, these unsettled times have resulted in a “diffusion of distrust,” in which an elite conservative discourse of skepticism toward the media has also become a popular form of compensatory control among self-identified liberals. Perceiving “fake news” and media sensationalism as “not good” for their mental health, respondents also reported experiencing media burnout and withdrawing from media consumption. As the pandemic passes its one-year anniversary, this research has implications for long-term media coverage on COVID-19 and ongoing media trust and consumption.

