IMPROVING SPAM EMAIL FILTERING EFFICIENCY USING BAYESIAN BACKWARD APPROACH PROJECT

Author(s):  
M. SHESHIKALA

Unethical e-mail senders bear little or no cost for the mass distribution of messages, yet ordinary e-mail users are forced to spend time and effort reading undesirable messages in their mailboxes. With the rapid growth of electronic mail (e-mail), many people and companies have found it an easy way to distribute massive volumes of unwanted messages to a tremendous number of users at very low cost. These unwanted bulk messages, or junk e-mails, are called spam. Several machine learning approaches have been applied to this problem. In this paper, we explore a new approach based on Bayesian classification that can automatically classify e-mail messages as spam or legitimate, and we study its performance on various datasets.
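As a rough illustration of the underlying idea (not the paper's exact backward approach), a multinomial naive Bayes filter with Laplace smoothing can be sketched in a few lines of Python; the tiny training corpus below is invented:

```python
import math
from collections import Counter

def train(messages):
    """Count word frequencies and class priors; messages is [(text, label)]."""
    counts = {"spam": Counter(), "ham": Counter()}
    priors = Counter()
    for text, label in messages:
        priors[label] += 1
        counts[label].update(text.lower().split())
    return counts, priors

def classify(text, counts, priors, alpha=1.0):
    """Return the class with the highest Laplace-smoothed log-posterior."""
    vocab = set(counts["spam"]) | set(counts["ham"])
    scores = {}
    for label in ("spam", "ham"):
        total = sum(counts[label].values())
        score = math.log(priors[label] / sum(priors.values()))
        for word in text.lower().split():
            score += math.log((counts[label][word] + alpha) /
                              (total + alpha * len(vocab)))
        scores[label] = score
    return max(scores, key=scores.get)

training = [
    ("win free money now", "spam"),
    ("cheap pills free offer", "spam"),
    ("meeting agenda for monday", "ham"),
    ("lunch with the project team", "ham"),
]
counts, priors = train(training)
print(classify("free money offer", counts, priors))        # spam
print(classify("project meeting monday", counts, priors))  # ham
```

Real filters add tokenization, stop-word handling, and probability thresholds, but the core decision rule is this posterior comparison.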

Drones ◽  
2020 ◽  
Vol 4 (3) ◽  
pp. 45
Author(s):  
Maria Angela Musci ◽  
Luigi Mazzara ◽  
Andrea Maria Lingua

Aircraft ground de-icing operations play a critical role in flight safety. However, handling aircraft de-icing commonly requires a considerable quantity of de-icing fluids. Moreover, some pre-flight inspections are carried out with engines running; thus, a large amount of fuel is wasted and CO2 is emitted. This implies substantial economic and environmental impacts. In this context, the European project (reference call: MANUNET III 2018, project code: MNET18/ICT-3438) called SEI (Spectral Evidence of Ice) aims to provide innovative tools to identify ice on aircraft and improve the efficiency of the de-icing process. The project includes the design of a low-cost UAV (uncrewed aerial vehicle) platform and the development of a quasi-real-time ice detection methodology to ensure faster, semi-automatic operation with a reduction in operating time and de-icing fluids. The purpose of this work, developed within the activities of the project, is to define and test the most suitable sensor using a radiometric approach and machine learning algorithms. The adopted methodology consists of classifying ice from spectral imagery collected by two different sensors: a multispectral and a hyperspectral camera. Since the UAV prototype is under construction, the experimental analysis was performed on a simulation dataset acquired on the ground. A comparison between the two approaches and their related image-processing algorithms (random forest and support vector machine) is presented: practical results show that it is possible to identify ice in both cases. Nonetheless, the hyperspectral camera guarantees a more reliable solution, reaching a higher accuracy in classifying iced surfaces.
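A minimal sketch of the classification step with scikit-learn's random forest and SVM on synthetic per-pixel spectra; the band count, reflectance values, and the spectral shift used to separate the classes are invented stand-ins for the project's real imagery:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic per-pixel spectra: here we simply assume iced surfaces
# reflect less in the last few bands, so their mean is shifted down.
n, bands = 200, 10
ice = rng.normal(0.6, 0.05, (n, bands))
ice[:, 6:] -= 0.3
clean = rng.normal(0.6, 0.05, (n, bands))
X = np.vstack([ice, clean])
y = np.array([1] * n + [0] * n)   # 1 = ice, 0 = clean surface

Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=0)
models = {"RandomForest": RandomForestClassifier(random_state=0),
          "SVM": SVC()}
for name, model in models.items():
    model.fit(Xtr, ytr)
    print(name, "accuracy:", model.score(Xte, yte))
```

On real hyperspectral cubes the same fit/predict interface applies per pixel, after flattening the cube to a (pixels, bands) matrix.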


Sensors ◽  
2020 ◽  
Vol 20 (23) ◽  
pp. 6959
Author(s):  
Idan Zak ◽  
Reuven Katz ◽  
Itzik Klein

Inertial navigation systems provide the platform's position, velocity, and attitude during operation. As dead-reckoning systems, they require initial conditions to calculate the navigation solution. While the initial position and velocity vectors are provided by external means, the initial attitude can be determined using the system's inertial sensors in a process known as coarse alignment. When considering low-cost inertial sensors, only the initial roll and pitch angles can be determined, using the accelerometer measurements. The accuracy of, as well as the time required for, the coarse alignment process is critical for navigation solution accuracy, particularly in pure-inertial scenarios, because of the navigation solution drift. In this paper, a machine learning framework for the stationary coarse alignment stage is proposed. To that end, classical machine learning approaches are used in a two-stage approach to regress the roll and pitch angles. Alignment results obtained both in simulations and in field experiments using a smartphone show the benefits of the proposed approach over the commonly used analytical coarse alignment procedure.
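For context, the commonly used analytical coarse alignment (leveling) that the proposed ML framework is compared against can be sketched as follows; accelerometer axis and sign conventions vary between devices, so the convention chosen here is an assumption:

```python
import math

def coarse_align(ax, ay, az):
    """Analytical leveling: roll and pitch (rad) from a stationary
    accelerometer reading, assuming the z axis points up and the
    sensor reads +g at rest (convention varies by device)."""
    roll = math.atan2(ay, az)
    pitch = math.atan2(-ax, math.hypot(ay, az))
    return roll, pitch

# Level sensor: only gravity on z -> roll ~ 0, pitch ~ 0
print(coarse_align(0.0, 0.0, 9.81))

# Rolled by 0.2 rad: gravity split between y and z -> roll ~ 0.2
print(coarse_align(0.0, 9.81 * math.sin(0.2), 9.81 * math.cos(0.2)))
```

Because these angles are computed from noisy single-epoch measurements, practical systems average many samples, which is exactly the time/accuracy trade-off the paper's learned regressor targets.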


2021 ◽  
Vol 7 ◽  
pp. e670
Author(s):  
Marcio Dorn ◽  
Bruno Iochins Grisci ◽  
Pedro Henrique Narloch ◽  
Bruno César Feltes ◽  
Eduardo Avila ◽  
...  

The coronavirus pandemic caused by the novel SARS-CoV-2 has significantly impacted human health and the economy, especially in countries struggling with financial resources for medical testing and treatment, such as Brazil, the third most affected country by the pandemic. In this scenario, machine learning techniques have been heavily employed to analyze different types of medical data and aid decision making, offering a low-cost alternative. Due to the urgency of fighting the pandemic, a massive number of works apply machine learning approaches to clinical data, including complete blood count (CBC) tests, which are among the most widely available medical tests. In this work, we review the machine learning classifiers most employed for CBC data, together with popular sampling methods to deal with class imbalance. Additionally, we describe and critically analyze three publicly available Brazilian COVID-19 CBC datasets and evaluate the performance of eight classifiers and five sampling techniques on the selected datasets. Our work provides a panorama of which classifier and sampling methods provide the best results for different relevant metrics and discusses their impact on future analyses. The metrics and algorithms are introduced in a way that aids newcomers to the field. Finally, the panorama discussed here can significantly benefit the comparison of results from new ML algorithms.
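The simplest of the sampling methods typically compared in such studies, random oversampling of the minority class, can be sketched in plain Python; the toy feature rows and labels below are invented:

```python
import random
from collections import Counter

def random_oversample(X, y, seed=0):
    """Duplicate minority-class samples at random until all classes
    match the size of the largest class."""
    rng = random.Random(seed)
    by_class = {}
    for xi, yi in zip(X, y):
        by_class.setdefault(yi, []).append(xi)
    target = max(len(rows) for rows in by_class.values())
    Xb, yb = [], []
    for label, rows in by_class.items():
        Xb.extend(rows)
        yb.extend([label] * len(rows))
        extra = target - len(rows)
        Xb.extend(rng.choices(rows, k=extra))
        yb.extend([label] * extra)
    return Xb, yb

X = [[i] for i in range(10)]
y = [1, 1, 0, 0, 0, 0, 0, 0, 0, 0]   # 2 positives vs 8 negatives
Xb, yb = random_oversample(X, y)
print(Counter(yb))   # both classes now have 8 samples
```

Crucially, oversampling must be applied only to the training split, never before cross-validation, or the duplicated rows leak into the test folds and inflate every metric.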


2021 ◽  
Vol ahead-of-print (ahead-of-print) ◽  
Author(s):  
Rajshree Varma ◽  
Yugandhara Verma ◽  
Priya Vijayvargiya ◽  
Prathamesh P. Churi

Purpose – The rapid advancement of technology in online communication and fingertip access to the Internet has resulted in the expedited dissemination of fake news to engage a global audience at a low cost by news channels, freelance reporters and websites. Amid the coronavirus disease 2019 (COVID-19) pandemic, individuals are inflicted with these false and potentially harmful claims and stories, which may harm the vaccination process. Psychological studies reveal that the human ability to detect deception is only slightly better than chance; therefore, there is a growing need for serious consideration of automated strategies to combat fake news that traverses these platforms at an alarming rate. This paper systematically reviews the existing fake news detection technologies by exploring various machine learning and deep learning techniques pre- and post-pandemic, which has never been done before to the best of the authors' knowledge.
Design/methodology/approach – The detailed literature review on fake news detection is divided into three major parts. The authors searched for papers published from 2017 onward on fake news detection approaches based on deep learning and machine learning. The papers were initially searched through the Google Scholar platform and scrutinized for quality, with "Scopus" and "Web of Science" kept as quality indexing parameters. All research gaps and available databases, data pre-processing, feature extraction techniques and evaluation methods for current fake news detection technologies have been explored and illustrated using tables, charts and trees.
Findings – The paper is dissected into two approaches, namely machine learning and deep learning, to present a better understanding and a clear objective. Next, the authors present a viewpoint on which approach is better, along with future research trends, issues and challenges for researchers, given the relevance and urgency of a detailed and thorough analysis of existing models. The paper also delves into fake news detection during COVID-19, from which it can be inferred that research and modeling are shifting toward the use of ensemble approaches.
Originality/value – The study also identifies several novel automated web-based approaches used by researchers to assess the validity of pandemic news that have proven successful, although currently reported accuracy has not yet reached consistent levels in the real world.
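As a minimal illustration of the machine-learning branch of this taxonomy (not any specific reviewed model), a TF-IDF plus logistic-regression text classifier can be sketched with scikit-learn; the four training snippets and their labels are invented:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Tiny illustrative corpus; real systems train on thousands of
# labeled articles from datasets such as those surveyed in the review.
texts = [
    "miracle cure stops virus overnight, doctors shocked",
    "share now: vaccine secretly alters your dna",
    "health ministry reports weekly vaccination figures",
    "study in peer-reviewed journal examines antibody response",
]
labels = ["fake", "fake", "real", "real"]

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(texts, labels)
print(model.predict(["doctors shocked by miracle vaccine cure"]))
```

Deep learning approaches replace the TF-IDF features with learned embeddings, but the supervised fit/predict structure is the same.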


Author(s):  
S. Priyanka ◽  
G. A. Pallavi ◽  
Nayak N. Swathi ◽  
Jawali D. Ashita

Over the past 35 years, synthetic or semi-synthetic polymers called "plastics" have been widely used across multiple fields due to their low cost, versatility, and durability. Plastics have proved to be a boon to mankind. However, overuse of non-biodegradable plastics comes with its own downsides. Despite constant efforts to reuse and recycle plastics, these polymers substantially contribute to the accumulation of debris hazardous to the environment. Plastic materials slowly break into fragments of micro- and nanoplastics due to aging and weathering. Micro- and nanoplastics have been found capable of entering the food chain and are hence viewed as threats. This review revolves around methods used for the detection and quantification of micro- and nanoplastics. Detection using methods such as Raman spectroscopy, infrared spectroscopy, SERS, MALDI-TOF, and machine learning approaches is discussed here. The research efforts described in this article aim to further facilitate the R&D initiatives of Jozbiz Technologies.
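One common building block of such spectroscopy pipelines, spectral library matching, can be sketched by correlating a measured spectrum against reference spectra. The Gaussian peak shapes and noise level below are invented for the sketch, and the peak positions are only approximate textbook Raman bands for the two polymers:

```python
import numpy as np

def identify_polymer(spectrum, references):
    """Match a measured spectrum against a reference library by
    Pearson correlation; return the best-matching polymer name."""
    best, best_r = None, -1.0
    for name, ref in references.items():
        r = np.corrcoef(spectrum, ref)[0, 1]
        if r > best_r:
            best, best_r = name, r
    return best, best_r

wavenumbers = np.linspace(400, 3200, 500)   # cm^-1 axis (synthetic)

def peaks(centers):
    """Build a synthetic spectrum as a sum of Gaussian peaks."""
    return sum(np.exp(-((wavenumbers - c) / 15.0) ** 2) for c in centers)

refs = {"polyethylene": peaks([1062, 1296, 2850]),
        "polystyrene": peaks([1001, 1602, 3054])}
noise = np.random.default_rng(1).normal(0, 0.02, 500)
measured = peaks([1062, 1296, 2850]) + noise

name, r = identify_polymer(measured, refs)
print(name, round(r, 3))   # polyethylene, correlation close to 1
```

The ML approaches discussed in the review effectively learn this matching from data instead of relying on a fixed reference library.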


Author(s):  
V. Akhil ◽  
G. Raghav ◽  
N. Arunachalam ◽  
D. S. Srinivas

The increase in the use of metal additive manufacturing (AM) processes in major industries such as aerospace, defense, and electronics indicates the need to maintain tight quality control. A quick, low-cost, and reliable online surface texture measurement and verification system is required to improve its industrial adoption. In this paper, a comprehensive investigation of the surface characteristics of Ti-6Al-4V selective laser melted (SLM) parts using image texture parameters is discussed. The image texture parameters extracted from surface images using first-order and second-order statistical methods, together with measured 3D surface roughness parameters, are used for characterizing the SLM surfaces. A comparative study of roughness prediction models developed using various machine learning approaches is also presented. Among the models, the Gaussian process regression (GPR) model gives an accurate prediction of roughness values, with an R2 value of more than 0.9. The test data results of all models are presented.
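A hedged sketch of such a GPR roughness model with scikit-learn, trained on synthetic stand-ins for the image texture features and measured roughness (the linear generating function, noise level, and feature count below are invented, not the paper's data):

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel

rng = np.random.default_rng(0)

# Stand-ins for texture features (e.g. first-order mean/variance,
# GLCM-style contrast) and a synthetic roughness response in micrometres.
X = rng.uniform(0, 1, (40, 3))
y = 5.0 + 8.0 * X[:, 0] - 3.0 * X[:, 1] + rng.normal(0, 0.1, 40)

gpr = GaussianProcessRegressor(kernel=ConstantKernel() * RBF(),
                               alpha=1e-2, normalize_y=True)
gpr.fit(X[:30], y[:30])

pred, std = gpr.predict(X[30:], return_std=True)   # mean and uncertainty
print("Held-out R^2:", round(gpr.score(X[30:], y[30:]), 3))
```

A practical advantage of GPR over the other regressors compared in such studies is the per-prediction standard deviation, which flags surfaces whose texture lies outside the training distribution.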


2019 ◽  
pp. 90-111 ◽  
Author(s):  
Natalia S. Pavlova ◽  
Andrey Е. Shastitko

The article deals with the problem of determining market boundaries for antitrust enforcement in the field of telecommunications. An empirical approach is proposed for determining the product boundaries of the market for the mass distribution of messages, taking into account the comparative characteristics of the types and methods of notifying (informing) end users; the possibilities of switching from one way of informing to another, including the evolution of such opportunities under the influence of technological change; and actual switching between different notification methods. Based on surveys of customers who send bulk SMS messages, it is shown that the product boundaries should include not only SMS but also e-mail, instant messengers, push notifications and voice information. The paper illustrates the possibility of applying critical loss analysis to determining market boundaries based on a combination of surveys and economic modeling.
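The break-even form of critical loss analysis mentioned in the abstract reduces to a one-line formula: for a hypothetical price rise t (the SSNIP) and a price-cost margin m, the critical loss is t/(t+m). A sketch with illustrative numbers not taken from the article:

```python
def critical_loss(price_rise, margin):
    """Break-even critical loss: the share of sales a hypothetical
    monopolist can lose before a price rise of `price_rise` (fraction)
    becomes unprofitable, given a price-cost margin `margin` (fraction)."""
    return price_rise / (price_rise + margin)

# Illustrative: a 5% SSNIP with a 40% margin gives a critical loss of
# about 11%. If surveys show more than 11% of SMS customers would
# switch to messengers/e-mail, the market is broader than SMS alone.
print(round(critical_loss(0.05, 0.40), 3))   # 0.111
```

The survey-based actual loss is then compared against this threshold, which is how the article combines questionnaires with economic modeling.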

