Deep Learning for Anomaly Detection

2021 ◽  
Vol 54 (2) ◽  
pp. 1-38
Author(s):  
Guansong Pang ◽  
Chunhua Shen ◽  
Longbing Cao ◽  
Anton Van Den Hengel

Anomaly detection, a.k.a. outlier detection or novelty detection, has been a lasting yet active research area in various research communities for several decades. There remain unique problem complexities and challenges that require advanced approaches. In recent years, deep learning-enabled anomaly detection, i.e., deep anomaly detection, has emerged as a critical direction. This article surveys research on deep anomaly detection with a comprehensive taxonomy, covering advancements in three high-level categories and 11 fine-grained categories of methods. We review their key intuitions, objective functions, underlying assumptions, advantages, and disadvantages, and discuss how they address the aforementioned challenges. We further discuss a set of possible future opportunities and new perspectives on addressing the challenges.
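To make the central intuition concrete, here is a minimal, hypothetical sketch (in PyTorch, not taken from the article) of one widely used deep anomaly detection recipe: train an autoencoder on predominantly normal data and score new samples by their reconstruction error.

```python
# Minimal, illustrative sketch: autoencoder-based anomaly scoring.
import torch
import torch.nn as nn

class AutoEncoder(nn.Module):
    def __init__(self, n_features: int, latent_dim: int = 8):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(n_features, 32), nn.ReLU(),
                                     nn.Linear(32, latent_dim))
        self.decoder = nn.Sequential(nn.Linear(latent_dim, 32), nn.ReLU(),
                                     nn.Linear(32, n_features))

    def forward(self, x):
        return self.decoder(self.encoder(x))

def anomaly_scores(model: AutoEncoder, x: torch.Tensor) -> torch.Tensor:
    """Higher reconstruction error -> more anomalous."""
    with torch.no_grad():
        recon = model(x)
    return ((x - recon) ** 2).mean(dim=1)

# Training sketch: minimise reconstruction error on (mostly) normal data.
model = AutoEncoder(n_features=20)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()
x_train = torch.randn(256, 20)          # placeholder for real "normal" data
for _ in range(100):
    optimizer.zero_grad()
    loss = loss_fn(model(x_train), x_train)
    loss.backward()
    optimizer.step()

scores = anomaly_scores(model, torch.randn(10, 20))
```

Samples whose reconstruction error is well above the errors seen on normal training data are flagged as anomalous.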


Semantic Web ◽  
2020 ◽  
pp. 1-16
Author(s):  
Francesco Beretta

This paper addresses the issue of interoperability of data generated by historical research and heritage institutions in order to make them re-usable for new research agendas according to the FAIR principles. After introducing the symogih.org project’s ontology, it proposes a description of the essential aspects of the process of historical knowledge production. It then develops an epistemological and semantic analysis of conceptual data modelling applied to factual historical information, based on the foundational ontologies Constructive Descriptions and Situations and DOLCE, and discusses the reasons for adopting the CIDOC CRM as a core ontology for the field of historical research, but extending it with some relevant, missing high-level classes. Finally, it shows how collaborative data modelling carried out in the ontology management environment OntoME makes it possible to elaborate a communal fine-grained and adaptive ontology of the domain, provided an active research community engages in this process. With this in mind, the Data for history consortium was founded in 2017 and promotes the adoption of a shared conceptualization in the field of historical research.



Sensors ◽  
2020 ◽  
Vol 20 (17) ◽  
pp. 4719
Author(s):  
Malik Haris ◽  
Jin Hou

Nowadays, autonomous vehicles are an active research area, especially after the emergence of machine vision tasks with deep learning. In such a visual navigation system for an autonomous vehicle, the controller captures images and predicts information so that the vehicle can navigate safely. In this paper, we first introduce small and medium-sized obstacles that were intentionally or unintentionally left on the road, which can pose hazards for both autonomous and human driving. Then, we discuss a Markov random field (MRF) model that fuses three potentials (gradient potential, curvature prior potential, and depth variance potential) to segment obstacles from non-obstacles in the hazardous environment. Once the obstacles have been segmented by the MRF model, a DNN model predicts the information needed to navigate the autonomous vehicle safely away from the hazardous environment on the roadway. We found that our proposed method can accurately segment obstacles from the blended road background and improve the navigation skills of the autonomous vehicle.
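As a rough illustration of how three such potentials could be fused, consider the sketch below; the weights, potential definitions, and thresholding are assumptions rather than the authors' exact formulation, and full MRF inference with pairwise smoothness terms is omitted.

```python
# Illustrative only: fuse three per-pixel potentials into an energy map and
# label high-energy pixels as obstacle candidates.
import numpy as np

def gradient_potential(gray: np.ndarray) -> np.ndarray:
    gy, gx = np.gradient(gray.astype(float))
    return np.hypot(gx, gy)

def curvature_prior_potential(gray: np.ndarray) -> np.ndarray:
    # Second derivatives as a crude curvature proxy.
    gy, gx = np.gradient(gray.astype(float))
    gyy, _ = np.gradient(gy)
    _, gxx = np.gradient(gx)
    return np.abs(gxx + gyy)

def depth_variance_potential(depth: np.ndarray, win: int = 5) -> np.ndarray:
    # Local variance of the depth map over a sliding window.
    pad = win // 2
    padded = np.pad(depth.astype(float), pad, mode="edge")
    out = np.zeros_like(depth, dtype=float)
    for i in range(depth.shape[0]):
        for j in range(depth.shape[1]):
            out[i, j] = padded[i:i + win, j:j + win].var()
    return out

def obstacle_mask(gray, depth, w=(1.0, 0.5, 2.0), threshold=None):
    energy = (w[0] * gradient_potential(gray)
              + w[1] * curvature_prior_potential(gray)
              + w[2] * depth_variance_potential(depth))
    if threshold is None:
        threshold = energy.mean() + energy.std()   # simple data-driven cut-off
    return energy > threshold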



2019 ◽  
Vol 53 (1-2) ◽  
pp. 3-17
Author(s):  
A Anandh ◽  
K Mala ◽  
R Suresh Babu

Image retrieval from large databases has become an active research area, and users expect such systems to return relevant results. Generally, a content-based image retrieval system retrieves images based on low-level features, high-level features, or a combination of both. Content-based image retrieval results can be improved by considering features such as directionality, contrast, coarseness, busyness, the local binary pattern, and the local tetra pattern, combined with a modified binary wavelet transform. In this research work, appropriate features are identified and applied, and the results are validated against existing systems. The modified binary wavelet transform is a variant of the binary wavelet transform, and this methodology retrieves more visually similar images. The proposed system also incorporates interactive feedback to retrieve the results users expect, addressing the semantic gap. Quantitative measures such as the average retrieval rate, false image acceptance ratio, and false image rejection ratio are evaluated to verify that the system meets users' expectations. In addition, the precision and recall of the proposed system are evaluated against the results of existing systems. When compared with existing content-based image retrieval methods, the proposed approach provides better retrieval accuracy.
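For illustration, here is a hedged sketch of a basic content-based retrieval loop and its precision/recall evaluation, using only a plain local binary pattern descriptor from scikit-image; the paper's modified binary wavelet transform and local tetra pattern are not reproduced here.

```python
# Hedged sketch: LBP-histogram features, nearest-neighbour retrieval,
# and precision/recall evaluation.
import numpy as np
from skimage.feature import local_binary_pattern

def lbp_histogram(gray: np.ndarray, points: int = 8, radius: float = 1.0) -> np.ndarray:
    lbp = local_binary_pattern(gray, points, radius, method="uniform")
    hist, _ = np.histogram(lbp, bins=points + 2, range=(0, points + 2), density=True)
    return hist

def retrieve(query_gray, database, top_k=10):
    """database: list of (image_id, grayscale array); returns the top_k closest ids."""
    q = lbp_histogram(query_gray)
    dists = [(img_id, np.linalg.norm(q - lbp_histogram(img))) for img_id, img in database]
    return [img_id for img_id, _ in sorted(dists, key=lambda t: t[1])[:top_k]]

def precision_recall(retrieved_ids, relevant_ids):
    hits = len(set(retrieved_ids) & set(relevant_ids))
    precision = hits / len(retrieved_ids) if retrieved_ids else 0.0
    recall = hits / len(relevant_ids) if relevant_ids else 0.0
    return precision, recall
```

Interactive relevance feedback would re-weight or re-run the query using the images the user marks as relevant, which is one common way of narrowing the semantic gap.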



Author(s):  
Mohamed Loey ◽  
Mukdad Rasheed Naman ◽  
Hala Helmy Zayed

Blood disease detection and diagnosis using blood cell images is an interesting and active research area in both the computer and medical fields. Many techniques have been developed to examine blood samples for leukemia; they fall into traditional techniques and deep learning (DL) techniques. This article presents a survey of the different traditional techniques and DL approaches that have been employed in blood disease diagnosis based on blood cell images, and compares the two approaches in terms of quality of assessment, accuracy, cost, and speed. The article covers 19 studies: 11 use traditional techniques based on image processing and machine learning (ML) algorithms such as K-means, K-nearest neighbor (KNN), Naïve Bayes, and Support Vector Machines (SVM), and 8 use DL, particularly Convolutional Neural Networks (CNNs), which are the most widely used in blood image disease detection since they are highly accurate, fast, and have the lowest cost. In addition, the article analyzes a number of recent works in the field, including the dataset sizes, the methodologies used, and the results obtained. Finally, based on the conducted study, it can be concluded that CNN-based systems have achieved considerable success in the field, for both feature extraction and classification, in terms of time and accuracy, and at lower cost in the detection of leukemia.
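As a generic illustration of the CNN approach the survey highlights (a toy architecture, not any of the eight surveyed DL models), a minimal PyTorch classifier for blood cell image patches might look like this:

```python
# Toy CNN sketch for classifying blood cell image patches, e.g. normal vs. leukemic.
import torch
import torch.nn as nn

class BloodCellCNN(nn.Module):
    def __init__(self, n_classes: int = 2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, n_classes),
        )

    def forward(self, x):
        return self.classifier(self.features(x))

model = BloodCellCNN()
logits = model(torch.randn(4, 3, 64, 64))   # a batch of 4 RGB patches
predictions = logits.argmax(dim=1)          # predicted class per patch
```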



2014 ◽  
Vol 556-562 ◽  
pp. 6419-6422
Author(s):  
Hao Li Ren ◽  
Xiao Peng Liang ◽  
Kong Yang Peng

Network traffic monitoring, analysis, and anomaly detection have become a very active research area in the networking community over the past few years. Traffic monitoring and analysis are essential in order to troubleshoot and resolve issues more effectively when they occur, so that network services are not brought to a standstill for extended periods of time. This paper discusses router-based monitoring techniques for WAN traffic monitoring. It gives an overview of the two most widely used router-based network monitoring tools (SNMP and Cisco NetFlow) and provides an example of the NetFlow technology.
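As a hedged illustration of router-based monitoring with SNMP (the host, community string, and interface index below are placeholders, and pysnmp is only one of several suitable libraries), polling an interface's inbound byte counter looks roughly like this:

```python
# Hedged sketch: read IF-MIB::ifInOctets from a router via SNMPv2c.
from pysnmp.hlapi import (SnmpEngine, CommunityData, UdpTransportTarget,
                          ContextData, ObjectType, ObjectIdentity, getCmd)

def poll_if_in_octets(host: str, community: str = "public", if_index: int = 1) -> int:
    error_indication, error_status, _, var_binds = next(getCmd(
        SnmpEngine(),
        CommunityData(community, mpModel=1),            # SNMPv2c
        UdpTransportTarget((host, 161)),
        ContextData(),
        ObjectType(ObjectIdentity("IF-MIB", "ifInOctets", if_index)),
    ))
    if error_indication or error_status:
        raise RuntimeError(error_indication or error_status.prettyPrint())
    return int(var_binds[0][1])                         # cumulative byte counter

# Sampling the counter twice and dividing the delta by the polling interval
# gives an approximate throughput figure for the monitored interface.
```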



2021 ◽  
Vol 3 ◽  
Author(s):  
Dan Luo ◽  
Wei Zeng ◽  
Jinlong Chen ◽  
Wei Tang

Deep learning has become an active research topic in the field of medical image analysis. In particular, for the automatic segmentation of stomatological images, great advances have been made in segmentation performance. In this paper, we systematically review the recent literature on deep learning-based segmentation methods for stomatological images and their clinical applications. We categorize them into different tasks and analyze their advantages and disadvantages. The main dimensions we explore are the data source, the backbone network, and the task formulation. We categorize data sources into panoramic radiography, dental X-rays, cone-beam computed tomography, multi-slice spiral computed tomography, and intraoral scan images. For the backbone network, we distinguish methods based on convolutional neural networks from those based on transformers. We divide task formulations into semantic segmentation tasks and instance segmentation tasks. Toward the end of the paper, we discuss the challenges and provide several directions for further research on the automatic segmentation of stomatological images.
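To ground the distinction between task formulations, here is a toy semantic-segmentation sketch (an illustrative encoder-decoder, not one of the reviewed architectures) that assigns a class to every pixel; instance segmentation would instead predict a separate mask per tooth, for example in the style of Mask R-CNN.

```python
# Toy semantic segmentation: per-pixel class logits for, e.g., tooth vs. background
# on a single-channel radiograph.
import torch
import torch.nn as nn

class TinySegNet(nn.Module):
    def __init__(self, n_classes: int = 2):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(32, 16, 2, stride=2), nn.ReLU(),
            nn.Conv2d(16, n_classes, 1),                # per-pixel class logits
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

net = TinySegNet()
logits = net(torch.randn(1, 1, 128, 128))   # (batch, classes, H, W)
mask = logits.argmax(dim=1)                 # predicted label per pixel
```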



Software effort estimation is a large and active research area. It estimates the time and effort required to perform a particular task; however, it is rare for effort to be estimated with a high level of reliability. There are various approaches to estimating the effort for a software application. In the present paper, a neutrosophic logic approach is used to estimate the effort for software applications. Neutrosophic logic is a mathematical model for ambiguity, uncertainty, incompleteness, vagueness, redundancy, contradiction, and inconsistency in data. It is an extension of fuzzy logic and can handle cases that fuzzy logic cannot, such as indeterminacy in the data. Neutrosophic logic gives results that are very similar to human reasoning. The present work concludes that neutrosophic logic improves on the performance of fuzzy logic when calculating software effort.
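As an illustration only: a single-valued neutrosophic number carries three independent degrees (truth T, indeterminacy I, falsity F) rather than the single membership degree of fuzzy logic. The score function in the sketch below is one form that appears in the neutrosophic-sets literature, and the effort figures and weighting scheme are made-up placeholders, not the paper's calibration.

```python
# Illustrative sketch: neutrosophic numbers and a score-weighted effort estimate.
from dataclasses import dataclass

@dataclass
class NeutrosophicNumber:
    truth: float          # degree the estimate is right
    indeterminacy: float  # degree we simply do not know
    falsity: float        # degree the estimate is wrong

    def score(self) -> float:
        # One score function reported in the literature: s = (2 + T - I - F) / 3.
        return (2 + self.truth - self.indeterminacy - self.falsity) / 3

def weighted_effort(candidates):
    """candidates: list of (effort_person_days, NeutrosophicNumber) pairs.
    Returns a score-weighted effort estimate."""
    total_weight = sum(n.score() for _, n in candidates)
    return sum(effort * n.score() for effort, n in candidates) / total_weight

estimate = weighted_effort([
    (120, NeutrosophicNumber(0.8, 0.1, 0.1)),   # confident expert
    (200, NeutrosophicNumber(0.6, 0.3, 0.2)),   # uncertain expert
])
```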



2021 ◽  
Vol 13 (8) ◽  
pp. 1591
Author(s):  
Teng Zhong ◽  
Cheng Ye ◽  
Zian Wang ◽  
Guoan Tang ◽  
Wei Zhang ◽  
...  

Precise urban façade color is the foundation of urban color planning. Nevertheless, existing research on urban color usually relies on manual sampling due to technical limitations, which makes it challenging to evaluate urban façade color at city scale and at fine-grained resolution simultaneously. In this study, we propose a deep learning-based approach for mapping the urban façade color using street-view imagery. The dominant color of the urban façade (DCUF) is adopted as an indicator to describe the urban façade color. A case study in Shenzhen was conducted to measure the urban façade color using Baidu Street View (BSV) panoramas, with city-scale mapping of the urban façade color in both irregular geographical units and regular grids. Shenzhen's urban façade color has a gray tone with low chroma. The results demonstrate that the proposed method extracts the urban façade color with a high level of accuracy. In short, this study contributes to the development of urban color planning by efficiently analyzing the urban façade color with higher validity across city-scale areas. Insights into the mapping of the urban façade color from a humanistic perspective could facilitate higher quality urban space planning and design.
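For illustration, here is a hedged sketch of the final color-extraction step only: given façade pixels already isolated by a segmentation model (that step is not shown), a dominant color can be estimated by k-means clustering. The cluster count and the use of RGB space are assumptions, not necessarily the study's choices.

```python
# Hedged sketch: dominant color of façade pixels via k-means clustering.
import numpy as np
from sklearn.cluster import KMeans

def dominant_color(facade_pixels: np.ndarray, n_clusters: int = 5) -> np.ndarray:
    """facade_pixels: (N, 3) array of RGB values sampled from façade regions."""
    km = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit(facade_pixels)
    counts = np.bincount(km.labels_, minlength=n_clusters)
    return km.cluster_centers_[counts.argmax()]         # centre of the largest cluster

pixels = np.random.randint(0, 256, size=(10_000, 3))    # placeholder for real pixels
print(dominant_color(pixels))
```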



For years, radiologists and clinicians have employed various approaches, machine learning algorithms included, to detect, diagnose, and prevent diseases using medical imaging. Recent advances in deep learning have made medical image analysis and processing an active research area, and various algorithms for segmentation, detection, and classification have been proposed. In this survey, we describe trends in the use of deep learning algorithms in medical imaging; their architectures and the hardware and software used are all discussed. We conclude with a proposed model for brain lesion segmentation and classification using Magnetic Resonance Images (MRI).



2021 ◽  
Vol 11 (8) ◽  
pp. 1055
Author(s):  
Ali Fawzi ◽  
Anusha Achuthan ◽  
Bahari Belaton

Brain image segmentation is one of the most time-consuming and challenging procedures in a clinical environment. Recently, a drastic increase in the number of brain disorders has been noted. This has indirectly led to an increased demand for automated brain segmentation solutions to assist medical experts in early diagnosis and treatment interventions. This paper aims to present a critical review of the recent trend in segmentation and classification methods for brain magnetic resonance images. Various segmentation methods ranging from simple intensity-based to high-level segmentation approaches such as machine learning, metaheuristic, deep learning, and hybridization are included in the present review. Common issues, advantages, and disadvantages of brain image segmentation methods are also discussed to provide a better understanding of the strengths and limitations of existing methods. From this review, it is found that deep learning-based and hybrid-based metaheuristic approaches are more efficient for the reliable segmentation of brain tumors. However, these methods fall behind in terms of computation and memory complexity.
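As a minimal illustration of the simplest family the review mentions, intensity-based segmentation, here is an Otsu-thresholding sketch using scikit-image; real pipelines add skull stripping, bias-field correction, and the stronger learning-based models discussed above.

```python
# Minimal sketch: intensity-based segmentation of one MRI slice via Otsu thresholding.
import numpy as np
from skimage.filters import threshold_otsu

def intensity_segment(slice_2d: np.ndarray) -> np.ndarray:
    """Return a binary mask separating bright tissue from background."""
    t = threshold_otsu(slice_2d)
    return slice_2d > t

slice_2d = np.random.rand(256, 256)       # placeholder for a real MRI slice
mask = intensity_segment(slice_2d)
print(mask.mean())                        # fraction of pixels labelled foreground
```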


