Invasive Weed Optimization Based Ransomware Detection in Cloud Environment

Author(s):  
Adil Hussain Mohammed

Cloud platforms provide support for managing, controlling, and monitoring different organizations. Due to the flexible nature of the cloud, the chance of attack increases, particularly software attacks in the form of ransomware. Many researchers have proposed various models to prevent such attacks or to identify such activities. This paper proposes a ransomware detection model based on a trained neural network. The neural network is trained on a filtered, optimized feature set obtained from a feature reduction algorithm. The paper proposes an Invasive Weed Optimization algorithm that filters a good set of features from the available input training dataset. The proposed model was tested on a real dataset containing sessions related to cloud ransomware attacks. Results show that the proposed model improves the compared parameter values.
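As a rough illustration of the feature-selection stage described in this abstract, the sketch below runs a generic Invasive Weed Optimization loop over binary feature masks. The toy fitness function, parameter values, and data are placeholder assumptions, not the authors' implementation.

```python
# Sketch of IWO used as a binary feature selector (illustrative assumptions only).
import numpy as np

rng = np.random.default_rng(0)

def iwo_feature_selection(fitness, n_features, pop_size=10, max_pop=25,
                          iters=50, seed_range=(1, 5),
                          sigma_init=1.0, sigma_final=0.1):
    """Evolve binary feature masks with IWO; higher fitness(mask) is better."""
    pop = (rng.random((pop_size, n_features)) < 0.5).astype(float)
    for t in range(iters):
        fit = np.array([fitness(m > 0.5) for m in pop])
        lo, hi = fit.min(), fit.max()
        ratio = (fit - lo) / (hi - lo + 1e-12)            # relative fitness
        n_seeds = (seed_range[0] + ratio * (seed_range[1] - seed_range[0])).astype(int)
        # Dispersion shrinks nonlinearly over iterations (standard IWO schedule).
        sd = ((iters - t) / iters) ** 2 * (sigma_init - sigma_final) + sigma_final
        offspring = [np.clip(p + rng.normal(0, sd, n_features), 0, 1)
                     for p, k in zip(pop, n_seeds) for _ in range(k)]
        pop = np.vstack([pop, *offspring]) if offspring else pop
        fit = np.array([fitness(m > 0.5) for m in pop])
        pop = pop[np.argsort(fit)[::-1][:max_pop]]        # competitive exclusion
    return pop[0] > 0.5

# Toy fitness standing in for a classifier's validation accuracy: reward the
# five "informative" features, penalize mask size.
informative = np.zeros(20, dtype=bool)
informative[:5] = True
fitness = lambda m: (m & informative).sum() - 0.1 * m.sum()
print("selected features:", np.where(iwo_feature_selection(fitness, 20))[0])
```

In the paper's pipeline, the selected feature subset would then be used to train the ransomware-detection neural network.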

Author(s):  
Bashra Kadhim Oleiwi Chabor Alwawi ◽  
Layla H. Abood

The coronavirus disease-2019 (COVID-19) is spreading quickly and globally as a pandemic and is the biggest problem facing humanity nowadays. Medical resources have become insufficient in many areas, and fast diagnosis of positive cases is increasingly important to prevent further spread of the pandemic. In this study, a deep learning approach for COVID-19 dataset expansion and detection is proposed. In the first stage of the proposed model, a COVID-19 dataset of chest X-ray images was collected and pre-processed, then expanded using data augmentation and enhanced with image processing and histogram equalization techniques. In the second stage, a new convolutional neural network (CNN) architecture was built and trained to classify each image as a COVID-19 (infected) or normal (uninfected) case. A graphical user interface (GUI) built with Tkinter was designed for the proposed COVID-19 detection model. Training simulations were carried out online on a Google Colaboratory graphics processing unit (GPU). The proposed model successfully classified COVID-19, achieving a training accuracy of 93.8% on the training dataset and 92.1% on the validation dataset, and reached the targeted performance with a minimal number of training epochs while producing satisfying results.
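The sketch below shows the general shape of such a pipeline: Keras-style augmentation for dataset expansion plus a small binary CNN classifier. The architecture, layer sizes, augmentation settings, and directory layout are assumptions for illustration and do not reproduce the authors' network.

```python
# Illustrative binary chest X-ray classifier (COVID-19 vs. normal); not the authors' model.
import tensorflow as tf
from tensorflow.keras import layers, models

def build_covid_cnn(input_shape=(224, 224, 1)):
    model = models.Sequential([
        layers.Input(shape=input_shape),
        layers.Conv2D(32, 3, activation="relu"), layers.MaxPooling2D(),
        layers.Conv2D(64, 3, activation="relu"), layers.MaxPooling2D(),
        layers.Conv2D(128, 3, activation="relu"), layers.MaxPooling2D(),
        layers.Flatten(),
        layers.Dense(128, activation="relu"),
        layers.Dropout(0.5),
        layers.Dense(1, activation="sigmoid"),   # infected vs. uninfected
    ])
    model.compile(optimizer="adam", loss="binary_crossentropy",
                  metrics=["accuracy"])
    return model

# Augmentation stands in for the dataset-expansion step; the directory is hypothetical.
datagen = tf.keras.preprocessing.image.ImageDataGenerator(
    rescale=1.0 / 255, rotation_range=10, zoom_range=0.1,
    width_shift_range=0.05, validation_split=0.2)
# train_gen = datagen.flow_from_directory("xray_data/", target_size=(224, 224),
#                                         color_mode="grayscale",
#                                         class_mode="binary", subset="training")
# build_covid_cnn().fit(train_gen, epochs=20)
```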


Author(s):  
Eslam Mohammed Abdelkader ◽  
Osama Moselhi ◽  
Mohamed Marzouk ◽  
Tarek Zayed

Existing bridges are aging and deteriorating, raising concerns for public safety and the preservation of these valuable assets. Furthermore, the transportation networks that manage many bridges face budgetary constraints. This state of affairs necessitates the development of a computer vision-based method to alleviate shortcomings in visual inspection-based methods. In this context, the present study proposes a three-tier method for the automated detection and recognition of bridge defects. In the first tier, singular value decomposition (SVD) is adopted to formulate the feature vector set through mapping the most dominant spatial domain features in images. The second tier encompasses a hybridization of the Elman neural network (ENN) and the invasive weed optimization (IWO) algorithm to enhance the prediction performance of the ENN. This is accomplished by designing a variable optimization mechanism that searches for the optimum exploration-exploitation trade-off in the neural network. The third tier involves validation through comparisons against a set of conventional machine-learning and deep-learning models, capitalizing on performance prediction and statistical significance tests. A computerized platform was programmed in C#.net to facilitate implementation by users. It was found that the developed method outperformed other prediction models, achieving overall accuracy, F-measure, Kappa coefficient, balanced accuracy, Matthews correlation coefficient, and area under curve of 0.955, 0.955, 0.914, 0.965, 0.937, and 0.904, respectively, under cross-validation. It is expected that the developed method can improve the decision-making process in bridge management systems.
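A minimal sketch of the first tier only, assuming grayscale input: the k largest singular values of an image are taken as a compact feature vector, which would then feed the IWO-tuned Elman network. The truncation level k is an arbitrary illustrative choice.

```python
# SVD-based feature extraction sketch (first tier only; k and preprocessing are assumptions).
import numpy as np

def svd_feature_vector(gray_image: np.ndarray, k: int = 20) -> np.ndarray:
    """Return the k largest singular values, normalized, as a feature vector."""
    s = np.linalg.svd(gray_image.astype(float), compute_uv=False)
    s = s[:k]
    return s / (np.linalg.norm(s) + 1e-12)

# Example on a random "image"; real defect images would be used in practice.
features = svd_feature_vector(np.random.rand(128, 128))
print(features.shape)  # (20,)
```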


Author(s):  
Shaolei Wang ◽  
Zhongyuan Wang ◽  
Wanxiang Che ◽  
Sendong Zhao ◽  
Ting Liu

Spoken language is fundamentally different from written language in that it contains frequent disfluencies, parts of an utterance that are corrected by the speaker. Disfluency detection (removing these disfluencies) is desirable to clean the input for use in downstream NLP tasks. Most existing approaches to disfluency detection heavily rely on human-annotated data, which is scarce and expensive to obtain in practice. To tackle the training data bottleneck, in this work, we investigate methods for combining self-supervised learning and active learning for disfluency detection. First, we construct large-scale pseudo training data by randomly adding or deleting words from unlabeled data and propose two self-supervised pre-training tasks: (i) a tagging task to detect the added noisy words and (ii) sentence classification to distinguish original sentences from grammatically incorrect sentences. We then combine these two tasks to jointly pre-train a neural network. The pre-trained neural network is then fine-tuned using human-annotated disfluency detection training data. The self-supervised learning method can capture task-specific knowledge for disfluency detection and achieves better performance when fine-tuned on a small annotated dataset compared to other supervised methods. However, because the pseudo training data are generated with simple heuristics and cannot fully cover all disfluency patterns, there is still a performance gap compared to supervised models trained on the full training dataset. We further explore how to bridge this gap by integrating active learning during the fine-tuning process. Active learning strives to reduce annotation costs by choosing the most critical examples to label and can address the weakness of self-supervised learning with a small annotated dataset. We show that by combining self-supervised learning with active learning, our model is able to match state-of-the-art performance with only about 10% of the original training data on both the commonly used English Switchboard test set and a set of in-house annotated Chinese data.
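A minimal sketch of the pseudo-data construction step, under the simplifying assumption that "adding words" means repeating nearby tokens rather than sampling from a vocabulary; the probabilities and tag names are illustrative, not the paper's exact recipe.

```python
# Build a corrupted sentence, token-level tags for the tagging task, and a
# sentence-level label for the classification task (illustrative heuristics only).
import random

random.seed(0)

def make_pseudo_example(tokens, p_insert=0.15, p_delete=0.1):
    corrupted, tags = [], []
    for tok in tokens:
        if random.random() < p_insert:       # insert a noisy copy -> tagged as added
            corrupted.append(tok); tags.append("D")
        if random.random() < p_delete:       # randomly drop the original word
            continue
        corrupted.append(tok); tags.append("O")
    # Sentence-level label: 1 if the sentence was changed at all.
    is_corrupted = int(tags != ["O"] * len(tokens))
    return corrupted, tags, is_corrupted

print(make_pseudo_example("i want to book a flight".split()))
```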


Author(s):  
Dima M. Alalharith ◽  
Hajar M. Alharthi ◽  
Wejdan M. Alghamdi ◽  
Yasmine M. Alsenbel ◽  
Nida Aslam ◽  
...  

Computer-based technologies play a central role in the dentistry field, as they provide many methods for diagnosing and detecting various diseases, such as periodontitis. The current study aimed to develop and evaluate state-of-the-art object detection and recognition techniques and deep learning algorithms for the automatic detection of periodontal disease in orthodontic patients using intraoral images. In this study, a total of 134 intraoral images were divided into a training dataset (n = 107 [80%]) and a test dataset (n = 27 [20%]). Two Faster Region-based Convolutional Neural Network (R-CNN) models using a ResNet-50 Convolutional Neural Network (CNN) backbone were developed. The first model detects the teeth to locate the region of interest (ROI), while the second model detects gingival inflammation. The detection accuracy, precision, recall, and mean average precision (mAP) were calculated to verify the significance of the proposed models. The teeth detection model achieved an accuracy, precision, recall, and mAP of 100%, 100%, 51.85%, and 100%, respectively. The inflammation detection model achieved an accuracy, precision, recall, and mAP of 77.12%, 88.02%, 41.75%, and 68.19%, respectively. This study demonstrated the viability of deep learning models for the detection and diagnosis of gingivitis in intraoral images, highlighting their potential usability in the field of dentistry and their role in reducing the severity of periodontal disease globally through preemptive non-invasive diagnosis.
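For illustration, a hedged sketch of how two such detectors might be instantiated with torchvision's Faster R-CNN / ResNet-50 implementation; the class counts, the weights flag (which depends on the torchvision version), and the omitted training loop are assumptions rather than the authors' exact setup.

```python
# Two Faster R-CNN (ResNet-50 FPN) detectors: one for teeth (ROI), one for inflammation.
import torchvision
from torchvision.models.detection.faster_rcnn import FastRCNNPredictor

def build_detector(num_classes: int):
    # num_classes includes background, e.g. 2 = background + one target class.
    model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
    in_features = model.roi_heads.box_predictor.cls_score.in_features
    model.roi_heads.box_predictor = FastRCNNPredictor(in_features, num_classes)
    return model

teeth_detector = build_detector(num_classes=2)         # locates the ROI (teeth)
inflammation_detector = build_detector(num_classes=2)  # detects gingival inflammation
```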


Author(s):  
Dmitrii Bakhteev

The article discusses computer vision as a modern technology for the automatic processing of graphic images and analyzes the relationship between the terms "computer vision" and "machine vision". The history of this technology's development is described; it has advanced thanks to improvements in both computer hardware and software. The computerization of forensic activities comes down to three areas: speeding up, simplifying, and improving the efficiency of information processing. A schema of a typical computer vision system is given, and the possibility of using systems based on artificial neural networks for image analysis is considered. The current state of computer vision systems and the possibility of applying them to the problems of criminal justice are analyzed. The main areas of application of computer vision in forensic activities are: identification of a person on the basis of appearance, both during operational identification and in portrait, photo, and video examinations; quantitative assessment of objects in an image (for example, when counting the participants of mass events); preliminary and expert examination of documents and their attributes; and the functioning of criminal registration systems. Criteria and technical conditions for sampling signatures to create a training dataset for a neural network are given, and the basics of developing an artificial neural network that recognizes signs of signature forgery are analyzed, comprising three steps: creating a training dataset, adjusting weights and training priorities, and testing the quality of network training.
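As a loose illustration of the three-step workflow named above (assemble a signature dataset, adjust the network weights, test the trained network), the sketch below uses a plain feed-forward classifier on placeholder feature vectors; nothing in it reflects the author's actual system.

```python
# Hypothetical genuine-vs-forged signature classifier following the three named steps.
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
# Step 1: a training dataset of fixed-length feature vectors extracted from
# signature images (random placeholders here), labelled genuine=0 / forged=1.
X = rng.random((200, 64))
y = rng.integers(0, 2, size=200)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

# Step 2: adjust the network weights on the training split.
net = MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=500, random_state=0)
net.fit(X_tr, y_tr)

# Step 3: test the quality of training on held-out signatures.
print("held-out accuracy:", net.score(X_te, y_te))
```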


Informatics ◽  
2020 ◽  
Vol 17 (2) ◽  
pp. 36-43
Author(s):  
R. S. Vashkevich ◽  
E. S. Azarov

The paper investigates the problem of voice activity detection in a noisy sound signal. An extremely compact convolutional neural network is proposed; the model has only 385 trainable parameters. The proposed model does not require significant computational resources, which allows it to be used, in line with the "Internet of Things" concept, on compact low-power devices. At the same time, the model provides state-of-the-art voice activity detection results in terms of detection accuracy. These properties are achieved by using a special convolutional layer that accounts for the harmonic structure of vocal speech. This layer also eliminates redundancy in the model because it is invariant to changes in fundamental frequency. The model's performance is evaluated in various noise conditions with different signal-to-noise ratios. The results show that the proposed model provides higher accuracy than the voice activity detection model from Google's WebRTC framework.
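The sketch below conveys the flavour of a very small frame-wise VAD network; it does not reproduce the paper's harmonic-structure convolution layer or its 385-parameter budget, and all layer sizes are assumptions.

```python
# Tiny frame-wise voice activity detector over spectral bins (illustrative only).
import torch
import torch.nn as nn

class TinyVAD(nn.Module):
    def __init__(self, n_bins: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(1, 4, kernel_size=7, padding=3),  # convolution across frequency bins
            nn.ReLU(),
            nn.AdaptiveAvgPool1d(8),
            nn.Flatten(),
            nn.Linear(4 * 8, 1),                        # speech / non-speech logit per frame
        )

    def forward(self, spectra):                         # spectra: (batch, n_bins)
        return self.net(spectra.unsqueeze(1)).squeeze(-1)

model = TinyVAD()
print(sum(p.numel() for p in model.parameters()), "trainable parameters")
frames = torch.rand(2, 64)                              # two spectral frames
print(torch.sigmoid(model(frames)))                     # per-frame speech probabilities
```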


2019 ◽  
Vol 9 (3) ◽  
pp. 75-88
Author(s):  
Sunita Gond ◽  
Shailendra Singh

Load balancing in a cloud environment, where multiple processes of different sizes must be handled, is an important issue. Many advanced technologies are incorporated into process-based resource allocation to enhance system efficiency. Resources can be allotted to processes using data that helps analyze and make important decisions at runtime. This article focuses on the allocation of cloud resources through two models. The first is TLBO (Teaching-Learning-Based Optimization), a population-based optimization algorithm that finds the correct position for a process to execute; the information used for this analysis includes the total number of machines, memory, execution time, etc. The output of the TLBO process sequence was then used as training input for an error back-propagation neural network. This trained neural network improved the quality of the job sequence. Training was done in such a way that all sets of features were utilized and paired with their process requirements and current positions. To increase the reliability of the work, an experiment was carried out on a real dataset. Results show that the proposed model outperforms previous approaches by researchers on various evaluation parameters at different scales.
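A minimal sketch of the TLBO stage under illustrative assumptions: candidate solutions are real-valued vectors scored by a toy cost function standing in for the schedule quality (execution time, memory fit, etc.) that the paper's pipeline would compute before feeding the results to the back-propagation network.

```python
# Teaching-Learning-Based Optimization sketch: teacher phase, then learner phase.
import numpy as np

rng = np.random.default_rng(1)

def tlbo_minimize(cost, dim, pop_size=20, iters=100, lo=0.0, hi=1.0):
    pop = rng.uniform(lo, hi, (pop_size, dim))
    fit = np.array([cost(x) for x in pop])
    for _ in range(iters):
        # Teacher phase: pull learners toward the best solution via the class mean.
        teacher, mean = pop[fit.argmin()], pop.mean(axis=0)
        tf_ = rng.integers(1, 3)                     # teaching factor in {1, 2}
        for i in range(pop_size):
            new = np.clip(pop[i] + rng.random(dim) * (teacher - tf_ * mean), lo, hi)
            if (c := cost(new)) < fit[i]:
                pop[i], fit[i] = new, c
        # Learner phase: each learner moves relative to a random peer.
        for i in range(pop_size):
            j = rng.integers(pop_size)
            if j == i:
                continue
            step = (pop[i] - pop[j]) if fit[i] < fit[j] else (pop[j] - pop[i])
            new = np.clip(pop[i] + rng.random(dim) * step, lo, hi)
            if (c := cost(new)) < fit[i]:
                pop[i], fit[i] = new, c
    return pop[fit.argmin()], fit.min()

# Toy cost standing in for "execution time of the resulting allocation".
best, best_cost = tlbo_minimize(lambda x: np.sum((x - 0.3) ** 2), dim=5)
print(best, best_cost)
```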


Sensors ◽  
2018 ◽  
Vol 18 (12) ◽  
pp. 4272 ◽  
Author(s):  
Jun Sang ◽  
Zhongyuan Wu ◽  
Pei Guo ◽  
Haibo Hu ◽  
Hong Xiang ◽  
...  

Vehicle detection is one of the important applications of object detection in intelligent transportation systems. It aims to extract specific vehicle-type information from pictures or videos containing vehicles. To address the problems of existing vehicle detection methods, such as the lack of vehicle-type recognition, low detection accuracy, and slow speed, a new vehicle detection model, YOLOv2_Vehicle, based on YOLOv2 is proposed in this paper. The k-means++ clustering algorithm was used to cluster the vehicle bounding boxes in the training dataset, and six anchor boxes with different sizes were selected. Considering that the different scales of the vehicles may influence the vehicle detection model, normalization was applied to improve the loss calculation for the length and width of bounding boxes. To improve the feature extraction ability of the network, a multi-layer feature fusion strategy was adopted, and the repeated convolution layers in the high layers were removed. Experimental results on the Beijing Institute of Technology (BIT)-Vehicle validation dataset demonstrated that the mean Average Precision (mAP) reached 94.78%. The proposed model also showed excellent generalization ability on the CompCars test dataset, where the "vehicle face" is quite different from the training dataset. Comparison experiments showed that the proposed method is effective for vehicle detection, and network visualization showed that the proposed model has excellent feature extraction ability.
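The anchor-selection step can be sketched as follows, assuming plain k-means++-seeded clustering over normalized (width, height) pairs with Euclidean distance; the box data here is a random placeholder, and the IoU-based distance sometimes used for YOLO anchor clustering is not reproduced.

```python
# Pick six anchor boxes by clustering training-set bounding-box sizes (illustrative data).
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
# Placeholder for the normalized (width, height) of labelled vehicle boxes.
box_wh = rng.uniform(0.05, 0.9, size=(500, 2))

kmeans = KMeans(n_clusters=6, init="k-means++", n_init=10, random_state=0)
kmeans.fit(box_wh)
# Sort the centroids by area for readability; these become the anchor boxes.
anchors = kmeans.cluster_centers_[np.argsort(kmeans.cluster_centers_.prod(axis=1))]
print("six anchor boxes (w, h):\n", anchors)
```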


2019 ◽  
Vol 2019 (2) ◽  
pp. 218-223
Author(s):  
S Yunusova ◽  

The article discusses the modeling of a fuzzy-logic system for regulating the process of drying raw cotton. The tasks of overcoming uncertainties arising in the operation of technological units at cotton-cleaning enterprises are presented, and an example of solving such a problem with an artificial neural network is given. Mathematical models based on the neural network were developed to formalize the raw-cotton drying process and to determine the optimal tuning parameters of the fuzzy-logic PID controller, allowing the operating modes of the drying drum's technological units to be changed. A method for determining the number of synaptic weights of the artificial neural network is proposed, which reduces the amount of training required and increases the speed of control decisions. The backpropagation-of-error method is used to train the neural network weights. The range of variation of the controller parameters is justified, taking into account the features of the cotton drying process. As a result, the proposed model was used in the control system of the drying process with respect to quality indicators, which led to an increase in the accuracy of the technological process.
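As a rough illustration of the control idea, the sketch below runs a discrete PID loop whose gains would, in the paper's scheme, be supplied by the trained neural-network / fuzzy-logic stage; the toy plant and the gain values are assumptions.

```python
# Discrete PID loop; kp, ki, kd stand in for the gains the NN/fuzzy stage would tune.
class PID:
    def __init__(self, kp, ki, kd, dt=1.0):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def step(self, setpoint, measured):
        error = setpoint - measured
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

# Toy first-order "drying drum temperature" plant driven toward a setpoint.
pid = PID(kp=1.2, ki=0.05, kd=0.3)
temp, setpoint = 20.0, 80.0
for _ in range(30):
    u = pid.step(setpoint, temp)
    temp += 0.1 * u                # crude plant response to the control signal
print(round(temp, 1))
```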

