Correction to: Abusive language detection from social media comments using conventional machine learning and deep learning approaches

Author(s):  
Muhammad Pervez Akhter ◽  
Zheng Jiangbin ◽  
Irfan Raza Naqvi ◽  
Mohammed AbdelMajeed ◽  
Tehseen Zia

2021 ◽  
Vol 11 (16) ◽  
pp. 7561
Author(s):  
Umair Iqbal ◽  
Johan Barthelemy ◽  
Wanqing Li ◽  
Pascal Perez

Blockage of culverts by transported debris is reported as a salient contributor to urban flash flooding. Conventional hydraulic modeling approaches have had little success in addressing the problem, primarily because of the unavailability of peak-flood hydraulic data and the highly non-linear behavior of debris at the culvert. This article explores a new direction by proposing the use of intelligent video analytics (IVA) algorithms to extract blockage-related information. The presented research aims to automate the manual visual blockage classification of culverts, from a maintenance perspective, by remotely applying deep learning models. The potential of existing convolutional neural network (CNN) architectures (i.e., DarkNet53, DenseNet121, InceptionResNetV2, InceptionV3, MobileNet, ResNet50, VGG16, EfficientNetB3, NASNet) is investigated on a dataset drawn from three sources (i.e., images of culvert openings and blockage (ICOB), visual hydrology-lab dataset (VHD), synthetic images of culverts (SIC)) to predict blockage in a given image. Models were evaluated based on their test-set performance (i.e., accuracy, loss, precision, recall, F1 score, Jaccard index, receiver operating characteristic (ROC) curve), floating point operations per second (FLOPs), and response time to process a single test instance. Furthermore, the performance of the deep learning models was benchmarked against conventional machine learning algorithms (i.e., SVM, RF, XGBoost). In addition, the idea of classifying deep visual features extracted by CNN models (i.e., ResNet50, MobileNet) with conventional machine learning approaches was also implemented. From the results, NASNet was the most efficient at classifying blockage images, with a 5-fold accuracy of 85%; however, MobileNet was recommended for hardware implementation because of its improved response time and a 5-fold accuracy comparable to NASNet (i.e., 78%). Performance comparable to standard CNN models was achieved when deep visual features were classified using conventional machine learning approaches. False negative (FN) instances, false positive (FP) instances, and CNN layer activations suggested that background noise and oversimplified labelling criteria were two factors contributing to the degraded performance of existing CNN algorithms. A framework for partial automation of the visual blockage classification process was proposed, given that none of the existing models achieved high enough accuracy to fully automate the manual process. In addition, a detection-classification pipeline with higher blockage classification accuracy (i.e., 94%) is proposed as a potential future direction for practical implementation.
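As an illustration of the deep-feature-plus-conventional-classifier idea described above, the following Python sketch extracts MobileNet embeddings from images and scores an SVM with 5-fold cross-validation. It is a minimal sketch, not the authors' code: the image array, labels, and classifier settings are placeholder assumptions.

```python
# Minimal sketch (not the authors' code): classify deep visual features from a
# pretrained CNN with a conventional machine learning classifier.
import numpy as np
from tensorflow.keras.applications import MobileNet
from tensorflow.keras.applications.mobilenet import preprocess_input
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

# Pretrained MobileNet without its classification head acts as a feature extractor.
extractor = MobileNet(weights="imagenet", include_top=False, pooling="avg")

def deep_features(images):
    """images: array of shape (n, 224, 224, 3) with values in [0, 255]."""
    return extractor.predict(preprocess_input(images.astype("float32")))

# Stand-in data for illustration; in practice these would be culvert images
# and their binary blockage labels.
X_img = np.random.rand(32, 224, 224, 3) * 255
y = np.array([0, 1] * 16)

X_feat = deep_features(X_img)                        # (n, 1024) MobileNet embeddings
clf = SVC(kernel="rbf")                              # conventional classifier on deep features
print(cross_val_score(clf, X_feat, y, cv=5).mean())  # 5-fold accuracy, as reported in the study
```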


2021 ◽  
Author(s):  
Yue Wang ◽  
Ye Ni ◽  
Xutao Li ◽  
Yunming Ye

Wildfires are serious disasters that often cause severe damage to forests and vegetation. Without early detection and suitable control action, a small wildfire can grow into a large and serious one. The problem is especially acute at night, as firefighters generally miss the chance to detect wildfires in the first few hours. Low-light satellites, which capture images at night, offer an opportunity to detect night fires in a timely manner. However, previous studies identify night fires using threshold methods or conventional machine learning approaches, which are not sufficiently robust or accurate. In this paper, we develop a new deep learning approach that determines night fire locations through pixel-level classification of low-light remote sensing images. Experimental results on VIIRS data demonstrate the superiority and effectiveness of the proposed method, which outperforms conventional threshold and machine learning approaches.
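A minimal sketch of the pixel-level classification idea: a small fully convolutional network that outputs a fire probability for every pixel of a low-light image. The band count and layer sizes are assumptions for illustration, not the architecture used in the paper.

```python
# Illustrative sketch only: per-pixel fire/no-fire classification of low-light imagery.
import tensorflow as tf
from tensorflow.keras import layers, models

def build_pixel_classifier(n_bands=5):
    # Input: a low-light image patch with n_bands spectral channels (assumed count).
    inp = layers.Input(shape=(None, None, n_bands))
    x = layers.Conv2D(32, 3, padding="same", activation="relu")(inp)
    x = layers.Conv2D(64, 3, padding="same", activation="relu")(x)
    # A 1x1 convolution yields one fire probability per pixel (pixel-level classification).
    out = layers.Conv2D(1, 1, activation="sigmoid")(x)
    return models.Model(inp, out)

model = build_pixel_classifier()
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.summary()
```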


2020 ◽  
Vol 10 (11) ◽  
pp. 2532-2542
Author(s):  
Junho Ahn ◽  
Thi Kieu Khanh Ho ◽  
Jaeyong Kang ◽  
Jeonghwan Gwak

A large number of studies that use artificial intelligence (AI) methodologies to analyze medical imaging and support computer-aided diagnosis have been conducted in the biomedical engineering domain. Owing to advances in dental diagnostic X-ray systems such as panoramic radiographs, periapical radiographs, and dental computed tomography (CT), especially dual-energy cone beam CT (CBCT), dental image analysis now presents more opportunities to discover new results and findings. Recent research on dental image analysis has increasingly incorporated analytics that utilize AI methodologies, which can be divided into conventional machine learning and deep learning approaches. This review first covers the theory of dual-energy CBCT and its applications in dentistry. Then, analytical methods for dental image analysis using conventional machine learning and deep learning are described. We conclude by discussing open issues and suggesting directions for future research.


Sensors ◽  
2021 ◽  
Vol 21 (19) ◽  
pp. 6392
Author(s):  
Lauran R. Brewster ◽  
Ali K. Ibrahim ◽  
Breanna C. DeGroot ◽  
Thomas J. Ostendorf ◽  
Hanqi Zhuang ◽  
...  

Inertial measurement unit sensors (IMU; i.e., accelerometer, gyroscope and magnetometer combinations) are frequently fitted to animals to better understand their activity patterns and energy expenditure. Capable of recording hundreds of data points a second, these sensors can quickly produce large datasets that require methods to automate behavioral classification. Here, we describe behaviors derived from a custom-built multi-sensor bio-logging tag attached to Atlantic Goliath grouper (Epinephelus itajara) within a simulated ecosystem. We then compared the performance of two commonly applied machine learning approaches (random forest and support vector machine) to a deep learning approach (convolutional neural network, or CNN) for classifying IMU data from this tag. CNNs are frequently used to recognize activities from IMU data obtained from humans but are less commonly considered for other animals. Thirteen behavioral classes were identified during ethogram development, nine of which were classified. For the conventional machine learning approaches, 187 summary statistics were extracted from the data, including time and frequency domain features. The CNN was fed absolute values obtained from fast Fourier transformations of the raw tri-axial accelerometer, gyroscope and magnetometer channels, with a frequency resolution of 512 data points. Five metrics were used to assess classifier performance; the deep learning approach performed better across all metrics (sensitivity = 0.962; specificity = 0.996; F1-score = 0.962; Matthews correlation coefficient = 0.959; Cohen's kappa = 0.833) than both conventional machine learning approaches. Generally, the random forest performed better than the support vector machine. In some instances, a conventional learning approach yielded a higher performance metric for particular classes (e.g., the random forest had an F1-score of 0.971 for backward swimming compared to 0.955 for the CNN). Deep learning approaches could potentially improve behavioral classification from IMU data beyond that obtained from conventional machine learning methods.
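The following sketch illustrates the described input pipeline: absolute values of a 512-point FFT of the raw tri-axial accelerometer, gyroscope, and magnetometer channels fed to a small 1D CNN. Window handling, channel ordering, and layer sizes are illustrative assumptions rather than the authors' exact configuration.

```python
# Sketch under assumptions: FFT magnitudes of raw IMU windows classified by a 1D CNN.
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, models

N_FFT = 512        # frequency resolution reported in the abstract
N_CHANNELS = 9     # 3-axis accelerometer + gyroscope + magnetometer
N_CLASSES = 9      # behavioral classes retained for classification

def fft_magnitudes(window):
    """window: (N_FFT, N_CHANNELS) raw samples -> absolute FFT values per channel."""
    return np.abs(np.fft.rfft(window, n=N_FFT, axis=0))   # (N_FFT//2 + 1, N_CHANNELS)

model = models.Sequential([
    layers.Input(shape=(N_FFT // 2 + 1, N_CHANNELS)),
    layers.Conv1D(32, 5, activation="relu"),
    layers.MaxPooling1D(2),
    layers.Conv1D(64, 5, activation="relu"),
    layers.GlobalAveragePooling1D(),
    layers.Dense(N_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy", metrics=["accuracy"])

window = np.random.randn(N_FFT, N_CHANNELS)          # one 512-sample IMU window (stand-in data)
spec = fft_magnitudes(window)[np.newaxis, ...]       # batch of one for the CNN
print(model.predict(spec).shape)                     # (1, N_CLASSES)
```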


Nowadays, social media platforms such as Facebook, Twitter, and Instagram are major channels for people to share their emotions about current situations in society. By identifying interesting patterns in this content, the government or another appropriate authority can make better and more useful decisions. Sentiment analysis is a method for extracting useful information, such as people's emotions (happy, sad, or neutral), from text. Much research has been carried out in the area of sentiment analysis, and machine learning and deep learning approaches play a major role in it. However, most existing work on sentiment analysis addresses the English language. In this paper, we propose a novel framework specifically designed for sentiment analysis of text data available in the Telugu language. The proposed framework integrates the Word2Vec word embedding model, a language translator, and machine learning and deep learning approaches such as a Recurrent Neural Network and the Naïve Bayes algorithm to collect and analyze the sentiment of Twitter data written in Telugu. The results show that the framework is effective in terms of accuracy, precision, and specificity.
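A minimal sketch of such a pipeline, assuming tweets have already been collected, translated, and tokenized: Word2Vec embeddings feed a simple recurrent classifier over three sentiment classes. The toy corpus, sequence length, and layer sizes are placeholders, not the proposed framework itself.

```python
# Minimal sketch, not the authors' pipeline: Word2Vec embeddings + a simple RNN classifier.
import numpy as np
from gensim.models import Word2Vec
from tensorflow.keras import layers, models

# tweets: list of token lists; labels: 0 = sad, 1 = neutral, 2 = happy (assumed encoding).
tweets = [["good", "service"], ["very", "bad", "experience"], ["it", "was", "okay"]]
labels = np.array([2, 0, 1])

w2v = Word2Vec(sentences=tweets, vector_size=100, window=5, min_count=1)

MAX_LEN = 20
def embed(tokens):
    # Map tokens to Word2Vec vectors and zero-pad the sequence to MAX_LEN.
    vecs = [w2v.wv[t] for t in tokens if t in w2v.wv][:MAX_LEN]
    pad = [np.zeros(100)] * (MAX_LEN - len(vecs))
    return np.array(vecs + pad)

X = np.stack([embed(t) for t in tweets])        # (n, MAX_LEN, 100)

rnn = models.Sequential([
    layers.Input(shape=(MAX_LEN, 100)),
    layers.SimpleRNN(64),
    layers.Dense(3, activation="softmax"),      # happy / sad / neutral
])
rnn.compile(optimizer="adam", loss="sparse_categorical_crossentropy", metrics=["accuracy"])
rnn.fit(X, labels, epochs=2, verbose=0)
```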


2021 ◽  
pp. 1-12
Author(s):  
Mukul Kumar ◽  
Nipun Katyal ◽  
Nersisson Ruban ◽  
Elena Lyakso ◽  
A. Mary Mekala ◽  
...  

Over the years, the need to differentiate various emotions in oral communication has played an important role in emotion-based studies. Different algorithms have been proposed to classify kinds of emotion. However, there is usually no measure of the fidelity of the emotion under consideration, primarily because most readily available annotated datasets are produced by actors rather than generated in real-world scenarios. Therefore, the predicted emotion lacks an important aspect called authenticity: whether an emotion is actual or stimulated. In this research work, we have developed a transfer learning and style transfer based hybrid convolutional neural network algorithm to classify the emotion as well as the fidelity of the emotion. The model is trained on features extracted from a dataset that contains stimulated as well as actual utterances. We compared the developed algorithm with conventional machine learning and deep learning techniques using metrics such as accuracy, precision, recall, and F1 score. The developed model performs much better than the conventional machine learning and deep learning models. The research aims to dive deeper into human emotion and build a model that understands it as humans do, with precision, recall, and F1-score values of 0.994, 0.996, and 0.995 for speech authenticity and 0.992, 0.989, and 0.99 for speech emotion classification, respectively.
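To make the two prediction targets concrete, the sketch below shows one possible shared convolutional trunk with separate heads for emotion class and speech authenticity, operating on log-mel spectrogram input. The input shape, layer sizes, and class count are assumptions; the authors' hybrid transfer-learning/style-transfer model is not reproduced here.

```python
# Illustrative two-headed classifier: emotion class plus acted-vs-actual authenticity.
from tensorflow.keras import layers, models

N_MELS, N_FRAMES, N_EMOTIONS = 64, 128, 6            # assumed spectrogram size and class count

inp = layers.Input(shape=(N_MELS, N_FRAMES, 1))       # log-mel spectrogram treated as an image
x = layers.Conv2D(32, 3, activation="relu")(inp)
x = layers.MaxPooling2D(2)(x)
x = layers.Conv2D(64, 3, activation="relu")(x)
x = layers.GlobalAveragePooling2D()(x)

emotion = layers.Dense(N_EMOTIONS, activation="softmax", name="emotion")(x)
authentic = layers.Dense(1, activation="sigmoid", name="authenticity")(x)

model = models.Model(inp, [emotion, authentic])
model.compile(optimizer="adam",
              loss={"emotion": "sparse_categorical_crossentropy",
                    "authenticity": "binary_crossentropy"},
              metrics={"emotion": "accuracy", "authenticity": "accuracy"})
model.summary()
```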


Sensors ◽  
2021 ◽  
Vol 21 (7) ◽  
pp. 2514
Author(s):  
Tharindu Kaluarachchi ◽  
Andrew Reis ◽  
Suranga Nanayakkara

Since Deep Learning (DL) regained popularity, the Artificial Intelligence (AI) and Machine Learning (ML) field has been undergoing rapid growth in research and real-world application development. Deep Learning has increased algorithmic complexity, and researchers and users have raised concerns regarding the usability and adoptability of Deep Learning systems. These concerns, coupled with increasing human-AI interactions, have created the emerging field of Human-Centered Machine Learning (HCML). We present this review paper as an overview and analysis of existing work in HCML related to DL. Firstly, we collaborated with field domain experts to develop a working definition for HCML. Secondly, through a systematic literature review, we analyze and classify 162 publications that fall within HCML. Our classification is based on aspects including contribution type, application area, and focused human categories. Finally, we analyze the topology of the HCML landscape by identifying research gaps, highlighting conflicting interpretations, addressing current challenges, and presenting future HCML research opportunities.


Electronics ◽  
2021 ◽  
Vol 10 (14) ◽  
pp. 1694
Author(s):  
Mathew Ashik ◽  
A. Jyothish ◽  
S. Anandaram ◽  
P. Vinod ◽  
Francesco Mercaldo ◽  
...  

Malware is one of the most significant threats in today’s computing world, since the number of websites distributing malware is increasing at a rapid rate. Malware analysis and prevention methods are increasingly necessary for computer systems connected to the Internet. This software exploits a system’s vulnerabilities to steal valuable information without the user’s knowledge and stealthily sends it to remote servers controlled by attackers. Traditionally, anti-malware products use signatures to detect known malware. However, the signature-based method does not scale to detecting obfuscated and packed malware. Since the cause of a problem is often best understood by studying the structural aspects of a program, such as mnemonics, instruction opcodes, and API calls, in this paper we investigate the relevance of features of unpacked malicious and benign executables, namely mnemonics, instruction opcodes, and APIs, to identify features that classify the executable. Prominent features are extracted using Minimum Redundancy and Maximum Relevance (mRMR) and Analysis of Variance (ANOVA). Experiments were conducted on four datasets using machine learning and deep learning approaches such as Support Vector Machine (SVM), Naïve Bayes, J48, Random Forest (RF), and XGBoost. In addition, we evaluate the performance of a collection of deep neural networks, including a deep dense network, a One-Dimensional Convolutional Neural Network (1D-CNN), and a CNN-LSTM, in classifying unknown samples, and we observed promising results using APIs and system calls. Combining APIs/system calls with static features attained a marginal performance improvement compared to models trained only on dynamic features. Moreover, to improve accuracy, we implemented our solution using distinct deep learning methods and demonstrated a fine-tuned deep neural network that resulted in F1-scores of 99.1% and 98.48% on Dataset-2 and Dataset-3, respectively.
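A hedged sketch of the static-feature branch described above: ANOVA F-scores rank opcode/API frequency features and a random forest is trained on the top-k subset. The feature matrix, labels, and k are placeholder assumptions, not the paper's datasets or tuned settings.

```python
# Hedged sketch, not the paper's exact pipeline: ANOVA feature selection + random forest.
import numpy as np
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import f1_score

X = np.random.rand(200, 500)                 # stand-in opcode/API frequency vectors
y = np.random.randint(0, 2, size=200)        # 1 = malware, 0 = benign (illustrative labels)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

selector = SelectKBest(score_func=f_classif, k=50)   # ANOVA-ranked prominent features
X_tr_sel = selector.fit_transform(X_tr, y_tr)
X_te_sel = selector.transform(X_te)

clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr_sel, y_tr)
print("F1:", f1_score(y_te, clf.predict(X_te_sel)))
```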

