Geospatial data and artificial intelligence technologies as innovative communication tools for quality education and lifelong learning

2020 ◽  
Vol 7 (1-2) ◽  
pp. 50-71
Author(s):  
Chidinma Henrietta Onwubere

The uniqueness of open and distance learning (ODL) lies in its wide reach to a large audience simultaneously in different locations. No system can achieve this better than geospatial data and artificial intelligence technologies (GDAITs). Globally, the current trend is to use GDAITs to improve the quality of life and productivity. Education is important for any country's economy, as it enhances overall life expectancy. Applying GDAITs in the educational sector, through broadcast digitization and publishing technologies, will yield greater achievements in the standard of learning and the literacy of populations. At certain ages in life, people develop apathy towards learning and are thus cut off from additional education that could provide them with lifelong learning. With GDAITs, they can be reached with quality education anywhere. Students face constraints of time, space, and finance in acquiring study materials. GDAITs can create and deploy seamless applications that collapse these constraints and improve learners' learning curves. This study investigates the exposure of youths to GDAITs and its influence on their learning patterns. Gerbner's cultivation theory serves as the theoretical framework. A survey of 200 undergraduate Nigerian students was conducted using a random sampling technique. Findings show that Nigerian youths are highly exposed to GDAITs. The paper concludes that GDAITs contribute both positively and negatively to development across diverse human activities, but that they are highly effective in fostering communication education and research in Nigeria. It recommends that information and communication technology be taught at all levels of education, so that Nigerians can develop the critical minds to distinguish what GDAITs can and cannot do. Media houses should continue to establish platforms to check fake news emanating from social media. Attention also needs to be focused on media content, to ensure that there are enough programmes to enhance communication education in Nigeria without fake news parasitism.
Keywords: GDAITs, Communication education, Learning processes, Social media, Digitization

2021 ◽  
Vol 11 (1) ◽  
Author(s):  
Mateusz Szczepański ◽  
Marek Pawlicki ◽  
Rafał Kozik ◽  
Michał Choraś

Abstract The ubiquity of social media and their deep integration into contemporary society have granted new ways to interact, exchange information, form groups, or earn money, all on a scale never seen before. Those possibilities, paired with their widespread popularity, contribute to the level of impact that social media display. Unfortunately, the benefits they bring come at a cost. Social media can be employed by various entities to spread disinformation, so-called 'fake news', either to make a profit or to influence the behaviour of society. To reduce the impact and spread of fake news, a diverse array of countermeasures has been devised. These include linguistic approaches, which often utilise Natural Language Processing (NLP) and Deep Learning (DL). However, as the latest advancements in the Artificial Intelligence (AI) domain show, a model's high performance is no longer enough. The explainability of the system's decision is equally crucial in real-life scenarios. The objective of this paper is therefore to present a novel explainability approach for BERT-based fake news detectors. The approach does not require extensive changes to the system and can be attached as an extension to operating detectors. For this purpose, two Explainable Artificial Intelligence (xAI) techniques, Local Interpretable Model-Agnostic Explanations (LIME) and Anchors, are used and evaluated on fake news data, i.e., short pieces of text forming tweets or headlines. The focus of this paper is on the explainability approach for fake news detectors, as the detectors themselves were part of the authors' previous works.
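As a rough illustration of how LIME can be attached to an existing BERT-based detector without modifying it, the sketch below wraps a Hugging Face sequence-classification model in a probability function and passes it to LimeTextExplainer. The model name, label order, and example headline are assumptions for demonstration, not details from the paper; a fine-tuned detector checkpoint would replace the placeholder model.

```python
# Minimal sketch: post-hoc LIME explanations for a BERT-style fake news detector.
# MODEL_NAME is a placeholder; the paper's fine-tuned detector would go here.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification
from lime.lime_text import LimeTextExplainer

MODEL_NAME = "bert-base-uncased"  # placeholder checkpoint (assumption)
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForSequenceClassification.from_pretrained(MODEL_NAME, num_labels=2)
model.eval()

def predict_proba(texts):
    """Return class probabilities for a batch of texts, in the form LIME expects."""
    enc = tokenizer(list(texts), padding=True, truncation=True,
                    max_length=128, return_tensors="pt")
    with torch.no_grad():
        logits = model(**enc).logits
    return torch.softmax(logits, dim=-1).numpy()

# Assumed label order: index 0 = real, index 1 = fake.
explainer = LimeTextExplainer(class_names=["real", "fake"])
headline = "Scientists confirm the moon is made of cheese"
explanation = explainer.explain_instance(headline, predict_proba, num_features=6)
print(explanation.as_list())  # tokens with their contribution towards the 'fake' class
```

Because the explainer only needs the black-box probability function, it can be bolted onto an operating detector as an extension, which matches the non-intrusive goal described in the abstract.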


2021 ◽  
Vol 08 (03) ◽  
pp. 01-08
Author(s):  
Prashant Kumar Shrivastava ◽  
Mayank Sharma ◽  
Megha Kamble ◽  
Vaibhav Gore

Quick access to information on social media networks, together with its exponential growth, has made it difficult to distinguish between fake and real information. Fast dissemination through sharing has amplified falsification exponentially. Avoiding the spread of fake information is also important for the credibility of social media networks. It is therefore an emerging research challenge to automatically check statements for misinformation through their source, content, or publisher, and to prevent unauthenticated sources from spreading rumours. This paper demonstrates an artificial intelligence-based approach for identifying false statements made by social network entities. Two variants of deep neural networks are applied to evaluate the datasets and analyse them for the presence of fake news. The implementation produced classification accuracy of up to 99% when the dataset was tested for binary (true or false) labelling over multiple epochs.
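As one hedged illustration of the kind of deep neural network such a binary detector might use (the abstract does not specify the architectures), the sketch below builds a simple embedding-plus-BiLSTM classifier in Keras. The vocabulary size, sequence length, and other hyperparameters are illustrative assumptions.

```python
# Sketch of a binary (true/false) fake-news text classifier, assuming Keras/TensorFlow.
import tensorflow as tf
from tensorflow.keras import layers

MAX_TOKENS, SEQ_LEN = 20000, 64  # assumed vocabulary size and statement length

vectorizer = layers.TextVectorization(max_tokens=MAX_TOKENS,
                                      output_sequence_length=SEQ_LEN)

def build_model():
    model = tf.keras.Sequential([
        tf.keras.Input(shape=(1,), dtype=tf.string),   # raw statement text
        vectorizer,                                     # text -> integer token ids
        layers.Embedding(MAX_TOKENS, 128),
        layers.Bidirectional(layers.LSTM(64)),
        layers.Dense(64, activation="relu"),
        layers.Dense(1, activation="sigmoid"),          # P(statement is fake)
    ])
    model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
    return model

# Assumed data layout: texts is an array of statements, labels is 0 = real, 1 = fake.
# vectorizer.adapt(texts)
# model = build_model()
# model.fit(texts, labels, validation_split=0.2, epochs=10, batch_size=32)
```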


Author(s):  
Yosra Sobeih ◽  
El Taieb EL Sadek

Modern communication means have imposed many changes on media work across the different stages of content production, from news gathering, visual and editorial processing, and verification of the truthfulness of what is reported, through to publication. The changes stimulated by modern tools, technologies, and artificial intelligence have affected all stages of news and media production, beginning with the emergence of smart newsrooms that rely first on human intelligence and then on machine intelligence, which has been forced to keep pace with developments in communication. Perhaps the most important of these stages is investigation and scrutiny: the detection of false news and rumours in our current era, in which information spreads very quickly through the internet, social media websites, and various media platforms.


2019 ◽  
Vol 25 (4) ◽  
pp. 62-67 ◽  
Author(s):  
Feyza Altunbey Ozbay ◽  
Bilal Alatas

Deceptive content, such as fake news created by social media users, is becoming increasingly dangerous. Individuals and society have been affected negatively by the spread of low-quality news on social media. Fake news needs to be detected and separated from real news to eliminate the disadvantages of social media. This paper proposes a novel approach for the fake news detection (FND) problem on social media. In this approach, the FND problem is treated as an optimization problem for the first time, and two metaheuristic algorithms, Grey Wolf Optimization (GWO) and Salp Swarm Optimization (SSO), are adapted to it, also for the first time. The proposed FND approach consists of three stages. The first stage is data preprocessing. The second stage adapts GWO and SSO to construct a novel FND model. The last stage uses the proposed FND model for testing. The proposed approach has been evaluated on three different real-world datasets, and the results have been compared with seven supervised artificial intelligence algorithms. The results show that the GWO algorithm performs best compared with the SSO algorithm and the other artificial intelligence algorithms. GWO therefore appears to be usable efficiently for solving different types of social media problems.
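Since GWO is the better-performing metaheuristic here, a minimal sketch of the standard Grey Wolf Optimizer loop may help show how such an algorithm searches a continuous space. The fitness function below is a placeholder, not the paper's FND model, and the population size, iteration count, and bounds are illustrative assumptions.

```python
# Generic Grey Wolf Optimizer: the three best wolves (alpha, beta, delta) guide
# the rest of the pack towards promising regions of the search space.
import numpy as np

def grey_wolf_optimizer(fitness, dim, n_wolves=20, n_iters=100, lb=-1.0, ub=1.0):
    rng = np.random.default_rng(0)
    wolves = rng.uniform(lb, ub, size=(n_wolves, dim))
    scores = np.apply_along_axis(fitness, 1, wolves)

    for t in range(n_iters):
        order = np.argsort(scores)
        alpha, beta, delta = wolves[order[:3]]      # three best solutions lead the pack
        a = 2 - 2 * t / n_iters                     # control parameter decreasing from 2 to 0
        for i in range(n_wolves):
            new_pos = np.zeros(dim)
            for leader in (alpha, beta, delta):
                r1, r2 = rng.random(dim), rng.random(dim)
                A, C = 2 * a * r1 - a, 2 * r2
                D = np.abs(C * leader - wolves[i])
                new_pos += (leader - A * D) / 3.0   # average the moves guided by each leader
            wolves[i] = np.clip(new_pos, lb, ub)
            scores[i] = fitness(wolves[i])

    best = int(np.argmin(scores))
    return wolves[best], scores[best]

# Placeholder objective: in an FND setting this would instead be, e.g., the
# validation error of a detection model parameterized by the candidate vector.
best_x, best_f = grey_wolf_optimizer(lambda x: float(np.sum(x ** 2)), dim=10)
print(best_f)
```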


2019 ◽  
Vol 11 (2) ◽  
pp. 373-392 ◽  
Author(s):  
Sam Gregory

Abstract Pessimism currently prevails around human rights globally, as well as about the impact of digital technology and social media in supporting rights. However, there have been key successes in the use of these tools for documentation and advocacy in the past decade, including greater participation, more documentation, and growth of new fields around citizen evidence and fact-finding. Governments and others antagonistic to human rights have caught up in terms of weaponizing the affordances of the internet and pushing back on rights actors. Key challenges to be grappled with are consistent with ones that have existed for a decade but are exacerbated now—how to protect and enhance safety of vulnerable people and provide agency over visibility and anonymity; how to ensure and improve trust and credibility of human rights documentation and advocacy campaigning; and how to identify and use new strategies that optimize for a climate of high media volume, declining trust in traditional sources, and active strategies of distraction and misinformation. All of these activities take place primarily within a set of platforms that are governed by commercial imperatives and attention-based algorithms, and that increasingly use unaccountable content moderation processes driven by artificial intelligence. The article argues for a pragmatic approach to harm reduction within the platforms and tools that are used by a diverse range of human rights defenders, and for proactive engagement to ensure that an inclusive human rights perspective is centred in responses to new challenges at a global level within a multipolar world, as well as in specific areas of challenge and opportunity such as fake news and authenticity, deepfakes, use of artificial intelligence to find and make sense of information, virtual reality, and how we ensure effective solidarity activism. Solutions and usages in these areas must avoid causing inadvertent as well as deliberate harms to already marginalized people.


Author(s):  
Ujwal Patil ◽  
Prof. P. M. Chouragade

Technological advancements and qualitative improvements in the fields of artificial intelligence and deep learning have led to the creation of realistic-looking but phoney digital content known as deepfakes. These manipulated videos can quickly be shared via social media to spread fake news or disinformation, which not only harms those who are deceived but also damages social media sites by diminishing trust in them. Such deepfake videos cannot be checked, since there are no regulatory mechanisms in place; as a result, untrustworthy outlets can post whatever they wish, causing confusion in society. Current solutions are unable to provide digital media history tracing and authentication, so it is essential to develop effective methods for detecting deepfake video and, in particular, for determining the source or origin of such footage. We therefore implement blockchain techniques to trace back and determine the origin of digital media; these techniques help in the effective recognition of deepfake video and in calculating a trust factor for each user.
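As an illustration of the kind of provenance tracing described above, the following is a minimal sketch, not the authors' implementation, of a hash-chained provenance log: each upload or derived version of a video stores the hash of its parent record, so the chain can be walked back to the original source. A real deployment would use an actual blockchain platform with signed, distributed transactions; all names below are hypothetical.

```python
# Sketch: hash-chained provenance records for tracing the origin of a media file.
import hashlib
import json
import time

def sha256(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

class ProvenanceChain:
    def __init__(self):
        self.records = []

    def add_record(self, media_bytes, uploader, parent_index=None):
        """Register a media item; parent_index points to the record it was derived from."""
        parent_hash = (self.records[parent_index]["record_hash"]
                       if parent_index is not None else None)
        record = {
            "media_hash": sha256(media_bytes),  # fingerprint of the video itself
            "uploader": uploader,
            "parent": parent_hash,              # link to the parent provenance record
            "timestamp": time.time(),
        }
        record["record_hash"] = sha256(json.dumps(record, sort_keys=True).encode())
        self.records.append(record)
        return len(self.records) - 1

    def trace_origin(self, index):
        """Walk parent links back to the original upload."""
        chain, rec = [], self.records[index]
        while rec is not None:
            chain.append(rec)
            parent = rec["parent"]
            rec = next((r for r in self.records if r["record_hash"] == parent), None)
        return chain
```

Because each record hash covers the parent hash, tampering with any earlier entry breaks the chain, which is what allows the origin of a re-shared clip to be verified rather than merely asserted.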

