Web application with data centric approach to ship powering prediction using deep learning

2022 ◽  
pp. 100226
Author(s):  
Jauhari Khairuddin ◽  
Adi Maimun ◽  
Kazuo Hiekata ◽  
Chee Loon Siow ◽  
Arifah Ali
2019 ◽  
Vol 8 (2S11) ◽  
pp. 3721-3724

With the advent of deep learning, image classification has made good progress. However, the automatic generation of captions for images is still a challenging problem and is in the initial stages of artificial intelligence research. Automatic description of images has applications in social networking and will be useful to visually impaired persons. This paper concentrates on designing a user-friendly web application framework which can predict the caption of an image using deep learning techniques. The verbs and objects present in the caption are used for forming the emoji and for predicting the major color of the image.
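The two post-processing steps named above (mapping caption words to emoji, and estimating an image's major color) can be sketched in plain Python. The word-to-emoji table and the pixel values below are illustrative assumptions, not the paper's actual vocabulary or data.

```python
from collections import Counter

# Hypothetical word-to-emoji lookup for verbs/objects found in a caption.
WORD_TO_EMOJI = {
    "dog": "\U0001F436",      # dog face
    "running": "\U0001F3C3",  # person running
    "beach": "\U0001F3D6",    # beach with umbrella
}

def caption_to_emoji(caption):
    """Collect an emoji for every known verb/object in the caption."""
    return [WORD_TO_EMOJI[w] for w in caption.lower().split() if w in WORD_TO_EMOJI]

def major_color(pixels):
    """Label each RGB pixel by its dominant channel; return the most common label."""
    names = ("red", "green", "blue")
    counts = Counter(names[max(range(3), key=lambda i: p[i])] for p in pixels)
    return counts.most_common(1)[0][0]

print(caption_to_emoji("A dog running on the beach"))  # dog, runner, beach emoji
print(major_color([(200, 30, 10), (180, 90, 40), (20, 30, 200)]))  # red
```

A real system would run these on the caption emitted by the captioning model and on the decoded image pixels; the logic itself stays this simple.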


Dengue has become endemic in Malaysia, and the cost of operations to exterminate mosquito habitats is high. Effective operations depend on information from the community, but without knowing the characteristics of Aedes larvae it is hard to recognize the larvae without guidance from an expert. The use of deep learning in image classification and recognition is crucial to tackling this problem. The purpose of this project is to study the characteristics of Aedes larvae and determine the best convolutional neural network model for classifying mosquito larvae. Three performance evaluation metrics (accuracy, log-loss, and AUC-ROC) will be used to measure each model's individual performance. Then a performance scorecard consisting of Accuracy Score, Loss Score, File Size Score, and Training Time Score will be used to evaluate which model is best suited for implementation in a web or mobile application. From the scores collected for each model, ResNet50 proved to be the best model for classifying mosquito larva species.
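The three per-model metrics named above can be computed directly from their definitions for the binary case. The toy labels and probabilities below are illustrative only, not the study's data.

```python
import math

def accuracy(y_true, y_prob, thr=0.5):
    """Fraction of examples whose thresholded probability matches the label."""
    return sum((p >= thr) == bool(t) for t, p in zip(y_true, y_prob)) / len(y_true)

def log_loss(y_true, y_prob, eps=1e-12):
    """Mean binary cross-entropy; eps guards against log(0)."""
    return -sum(t * math.log(max(p, eps)) + (1 - t) * math.log(max(1 - p, eps))
                for t, p in zip(y_true, y_prob)) / len(y_true)

def auc_roc(y_true, y_prob):
    """Probability a random positive is ranked above a random negative (ties count 0.5)."""
    pos = [p for t, p in zip(y_true, y_prob) if t]
    neg = [p for t, p in zip(y_true, y_prob) if not t]
    wins = sum((pp > pn) + 0.5 * (pp == pn) for pp in pos for pn in neg)
    return wins / (len(pos) * len(neg))

y = [1, 1, 0, 0]
p = [0.9, 0.6, 0.4, 0.2]
print(accuracy(y, p), round(log_loss(y, p), 3), auc_roc(y, p))  # 1.0 0.338 1.0
```

The paper's scorecard then combines these with file size and training time, which reward models that are not just accurate but small and fast enough for web or mobile deployment.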


In Service-Oriented Architecture (SOA), web services play an important role. Web services are web application components that can be published, found, and used on the Web; they also enable machine-to-machine communication over a network. Cloud computing and distributed computing bring a great number of web services onto the WWW. Web service composition is the process of combining two or more web services to satisfy user requirements. The tremendous increase in the number of services and the complexity of user requirement specifications make web service composition a challenging task. Automated service composition is a technique in which composition is performed automatically with minimal or no human intervention. In this paper we propose an approach to web service composition for large-scale environments that considers QoS parameters. We use stacked autoencoders to learn features of web services; a recurrent neural network (RNN) then uses the learned features to predict new compositions. Experimental results show the efficiency and scalability of the approach: using a deep learning algorithm in web service composition leads to a high success rate and low computational cost.
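To make the composition problem concrete, here is a minimal QoS-aware sketch: services are chained when one's output type matches the next required input, preferring higher QoS. The service names, categories, and QoS values are invented for illustration; the paper itself replaces this kind of hand-written matching with features learned by stacked autoencoders and a predictive RNN.

```python
# Hypothetical service registry: each entry declares its input/output type
# and a single aggregated QoS score in [0, 1].
SERVICES = [
    {"name": "GeoLookup",  "in": "address", "out": "coords",   "qos": 0.9},
    {"name": "GeoLookup2", "in": "address", "out": "coords",   "qos": 0.7},
    {"name": "Weather",    "in": "coords",  "out": "forecast", "qos": 0.8},
]

def compose(start, goal):
    """Greedy chain: at each step pick the highest-QoS service that advances."""
    chain, current = [], start
    while current != goal:
        candidates = [s for s in SERVICES if s["in"] == current]
        if not candidates:
            return None  # no composition satisfies the request
        best = max(candidates, key=lambda s: s["qos"])
        chain.append(best["name"])
        current = best["out"]
    return chain

print(compose("address", "forecast"))  # ['GeoLookup', 'Weather']
```

Greedy matching like this breaks down at web scale, which is exactly the motivation for learning service features and predicting compositions instead of enumerating them.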


Author(s):  
Qusay Abdullah Abed ◽  
Osamah Mohammed Fadhil ◽  
Wathiq Laftah Al-Yaseen

In general, multidimensional data (from mobile applications, for example) contain a large amount of unnecessary information. Web application users find it difficult to get the information they need quickly and effectively due to the sheer volume of data (big data produced every second). Web personalization is one of the effective solutions to this problem. In this paper, we study data mining for web personalization using a blended deep learning model, and explore how this model helps to analyze and estimate huge numbers of operations. Providing personalized recommendations to improve reliability depends on exploiting useful information in the web application. The results of this research concern the training and testing of large data sets with a blended deep learning model based on a backpropagation neural network. The Hadoop framework was used to perform a number of experiments in different environments with a learning rate between -1 and +1. Several techniques were also used to evaluate the model parameters, with true positive cases used to assess the proposed model.
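The backpropagation network at the core of the blended model can be illustrated with a single logistic neuron trained on a toy preference signal. The data, features, and learning rate below are illustrative assumptions, not the paper's setup.

```python
import math
import random

random.seed(0)  # deterministic toy run

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# Toy personalization data: (feature1, feature2) -> did the user click?
DATA = [((0, 0), 0), ((0, 1), 0), ((1, 0), 1), ((1, 1), 1)]

w = [random.uniform(-1, 1) for _ in range(2)]
b = 0.0
lr = 0.5  # learning rate, here fixed inside the (-1, +1) range cited above

for _ in range(2000):
    for (x1, x2), t in DATA:
        y = sigmoid(w[0] * x1 + w[1] * x2 + b)
        grad = y - t           # dLoss/dz for sigmoid + cross-entropy
        w[0] -= lr * grad * x1  # backpropagate the error to each weight
        w[1] -= lr * grad * x2
        b -= lr * grad

preds = [round(sigmoid(w[0] * x1 + w[1] * x2 + b)) for (x1, x2), _ in DATA]
print(preds)  # [0, 0, 1, 1] -- the neuron learned that feature1 drives clicks
```

The paper scales this idea out: the same gradient updates, but over large data sets distributed across a Hadoop cluster rather than a four-row toy table.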


Author(s):  
Amel Imene Hadj Bouzid ◽  
Said Yahiaoui ◽  
Anis Lounis ◽  
Sid-Ahmed Berrani ◽  
Hacène Belbachir ◽  
...  

Coronavirus disease is a pandemic that has infected millions of people around the world. Lung CT-scans are effective diagnostic tools, but radiologists can quickly become overwhelmed by the flow of infected patients. Therefore, automated image interpretation needs to be achieved. Deep learning (DL) can support critical medical tasks including diagnostics, and DL algorithms have successfully been applied to the classification and detection of many diseases. This work aims to use deep learning methods to classify patients as Covid-19 positive or healthy. We collected four available datasets and tested our convolutional neural networks (CNNs) on different distributions to investigate the generalizability of our models. In order to clearly explain the predictions, the Grad-CAM and Fast-CAM visualization methods were used. Our approach reaches more than 92% accuracy on two different distributions. In addition, we propose a computer-aided diagnosis web application for Covid-19 diagnosis. The results suggest that our proposed deep learning tool can be integrated into the Covid-19 detection process and be useful for rapid patient management.
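The Grad-CAM visualization mentioned above reduces to a short computation: weight each convolutional activation map by the global average of its gradients, sum the weighted maps, and apply a ReLU. The 2x2 activation maps and gradients below are toy numbers, not CT features.

```python
def grad_cam(activations, gradients):
    """activations, gradients: lists of HxW maps (nested lists), one per channel."""
    h, w = len(activations[0]), len(activations[0][0])
    # alpha_k: global-average-pooled gradient for channel k
    alphas = [sum(sum(row) for row in g) / (h * w) for g in gradients]
    cam = [[0.0] * w for _ in range(h)]
    for a_k, A_k in zip(alphas, activations):
        for i in range(h):
            for j in range(w):
                cam[i][j] += a_k * A_k[i][j]
    # ReLU keeps only regions that positively support the predicted class
    return [[max(0.0, v) for v in row] for row in cam]

acts = [[[1.0, 0.0], [0.0, 2.0]], [[0.0, 1.0], [1.0, 0.0]]]
grads = [[[0.4, 0.4], [0.4, 0.4]], [[-0.2, -0.2], [-0.2, -0.2]]]
cam = grad_cam(acts, grads)
print(cam)  # high values mark pixels the model relied on
```

In the application described above, the resulting map is upsampled and overlaid on the CT slice so a radiologist can check which lung regions drove the Covid-19 prediction.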


2019 ◽  
Author(s):  
J. Kubach ◽  
A. Muehlebner-Farngruber ◽  
F. Soylemezoglu ◽  
H. Miyata ◽  
P. Niehusmann ◽  
...  

Abstract: We trained a convolutional neural network (CNN) to classify H&E-stained microscopic images of focal cortical dysplasia type IIb (FCD IIb) and cortical tuber of tuberous sclerosis complex (TSC). Both entities are distinct subtypes of human malformations of cortical development that share histopathological features consisting of neuronal dyslamination with dysmorphic neurons and balloon cells. The microscopic review of routine stainings of such surgical specimens remains challenging. A digital processing pipeline was developed for a series of 56 FCD IIb and TSC cases to obtain 4000 regions of interest and 200,000 sub-samples with different zoom and rotation angles to train a CNN. Our best performing network achieved 91% accuracy and 0.88 AUC-ROC (area under the receiver operating characteristic curve) on a hold-out test set. Guided gradient-weighted class activation maps visualized the morphological features used by the CNN to distinguish both entities. We then developed a web application which combines the visualization of whole slide images (WSI) with on-demand classification between FCD IIb and TSC by our pretrained, built-in CNN classifier. This approach might help to introduce deep learning applications for the histopathologic diagnosis of rare and difficult-to-classify brain lesions.
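The sub-sampling step of the pipeline can be sketched as a parameter grid: each region of interest is expanded into patches at several zoom levels and rotation angles. The specific zoom and angle values below are illustrative assumptions; they are chosen only so that 4000 ROIs yield the 200,000 sub-samples the abstract reports.

```python
import itertools

# Assumed augmentation grid: 5 zoom levels x 10 rotation angles per ROI.
ZOOMS = (1.0, 1.5, 2.0, 2.5, 3.0)
ANGLES = (0, 36, 72, 108, 144, 180, 216, 252, 288, 324)

def sub_samples(roi_ids):
    """One (roi, zoom, angle) triple per sub-sample to be cropped and rotated."""
    return [(roi, z, a) for roi, z, a in itertools.product(roi_ids, ZOOMS, ANGLES)]

samples = sub_samples(range(4000))
print(len(samples))  # 4000 ROIs x 5 zooms x 10 angles = 200000 sub-samples
```

Enumerating the grid up front, rather than sampling on the fly, makes the training set reproducible and lets the cropping work be distributed across workers.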


2021 ◽  
Author(s):  
Ali Moradi Vartouni ◽  
Matin Shokri ◽  
Mohammad Teshnehlab

Protecting websites and applications from cyber-threats is vital for any organization. A web application firewall (WAF) prevents attacks that damage applications by filtering and monitoring network traffic. A WAF solution based on anomaly detection can identify zero-day attacks, and deep learning is the state-of-the-art method widely used to detect attacks in the anomaly-based WAF area. Although deep learning has demonstrated excellent results on anomaly detection tasks in web requests, there is a trade-off between false-positive and missed-attack rates, which is a key problem in WAF systems. In addition, anomaly detection methods suffer from the difficulty of adjusting a threshold level to distinguish attack from normal traffic. In this paper, we first propose a model based on Deep Support Vector Data Description (Deep SVDD) and compare two feature extraction strategies, one-hot and bigram, on the raw requests. Second, to overcome the threshold challenge, we introduce a novel end-to-end algorithm, Auto-Threshold Deep SVDD (ATDSVDD), which determines an appropriate threshold during the learning process. Finally, we compare our model with other deep models on the CSIC-2010 and ECML/PKDD-2007 datasets. The results show that ATDSVDD on bigram feature data performs better in terms of accuracy and generalization.
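The thresholding problem ATDSVDD addresses can be illustrated with a plain SVDD-style scorer: each request embedding is scored by its distance to a learned hypersphere center, and a threshold on that distance separates normal from anomalous requests. The 2-D embeddings and the quantile rule below are toy assumptions; ATDSVDD's point is that the threshold is learned end-to-end during training instead of being hand-tuned like this.

```python
import math

CENTER = (0.0, 0.0)  # stand-in for the learned hypersphere center

def distance(point):
    """Anomaly score: Euclidean distance from the embedding to the center."""
    return math.dist(point, CENTER)

def quantile_threshold(normal_points, q=0.95):
    """Hand-tuned baseline: take a quantile of normal-traffic distances."""
    d = sorted(distance(p) for p in normal_points)
    return d[min(len(d) - 1, int(q * len(d)))]

normal = [(0.1, 0.0), (0.0, 0.2), (-0.1, 0.1), (0.2, -0.1)]
thr = quantile_threshold(normal)
print(distance((3.0, 4.0)) > thr)  # True: a far-away request is flagged
```

Every choice of `q` here trades false positives against missed attacks, which is exactly the tension the abstract describes and the reason for learning the threshold jointly with the model.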

