HDFNet: Hierarchical Dynamic Fusion Network for Change Detection in Optical Aerial Images

2021 ◽  
Vol 13 (8) ◽  
pp. 1440
Author(s):  
Yi Zhang ◽  
Lei Fu ◽  
Ying Li ◽  
Yanning Zhang

Accurate change detection in optical aerial images using deep learning techniques has attracted considerable research effort in recent years. Correct change-detection results usually involve both global and local deep features. Existing deep learning approaches have achieved good performance on this task. However, when a bi-temporal image pair contains change areas at multiple scales, existing methods still have difficulty adapting to these areas, leading to false detections and incomplete detected regions. To deal with these problems, we design a hierarchical dynamic fusion network (HDFNet) for the optical aerial image change-detection task. Specifically, we propose a change-detection framework with a hierarchical fusion strategy to provide sufficient information for change detection, and we introduce dynamic convolution modules to self-adaptively learn from this information. We also use a multilevel supervision strategy with multiscale loss functions to supervise the training process. Comprehensive experiments on two benchmark datasets, LEBEDEV and LEVIR-CD, verify the effectiveness of the proposed method, and the results show that our model achieves state-of-the-art performance.
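The dynamic convolution idea mentioned above can be made concrete. The following is a minimal PyTorch sketch of a dynamic convolution block in this spirit, where per-sample attention weights mix several parallel kernels; the kernel count, attention design, and layer sizes are illustrative assumptions, not the authors' HDFNet implementation.

import torch
import torch.nn as nn
import torch.nn.functional as F

class DynamicConv2d(nn.Module):
    """Mixes K parallel kernels with input-dependent attention weights."""
    def __init__(self, in_ch, out_ch, kernel_size=3, num_kernels=4):
        super().__init__()
        self.weight = nn.Parameter(
            torch.randn(num_kernels, out_ch, in_ch, kernel_size, kernel_size) * 0.02)
        # Attention branch: global average pool -> linear -> softmax over kernels.
        self.attn = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(in_ch, num_kernels))
        self.padding = kernel_size // 2

    def forward(self, x):
        b, c, h, w = x.shape
        alpha = F.softmax(self.attn(x), dim=1)                     # (B, K)
        # Per-sample mixture of the K kernels.
        w_mix = torch.einsum('bk,koihw->boihw', alpha, self.weight)
        # Apply a different mixed kernel to each sample via grouped convolution.
        x = x.reshape(1, b * c, h, w)
        w_mix = w_mix.reshape(-1, c, *self.weight.shape[-2:])
        out = F.conv2d(x, w_mix, padding=self.padding, groups=b)
        return out.reshape(b, -1, h, w)

# Example: y = DynamicConv2d(64, 64)(torch.randn(2, 64, 32, 32))   # -> (2, 64, 32, 32)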

2019 ◽  
Vol 16 (9) ◽  
pp. 4044-4052 ◽  
Author(s):  
Rohini Goel ◽  
Avinash Sharma ◽  
Rajiv Kapoor

Deep learning approaches have drawn much attention from researchers in the area of object recognition because of their inherent strength in overcoming the shortcomings of classical approaches that depend on hand-crafted features. In the last few years, deep learning techniques have driven many advances in object recognition. This paper presents some recent and efficient deep learning frameworks for object recognition, along with an up-to-date survey of recently developed deep-neural-network-based object recognition methods. The various benchmark datasets used for performance evaluation are also discussed. Applications of object recognition to specific types of objects (such as faces, buildings, and plants) are also highlighted. We conclude with the merits and demerits of existing methods and the future scope of this area.


2021 ◽  
Vol 22 (15) ◽  
pp. 7911
Author(s):  
Eugene Lin ◽  
Chieh-Hsin Lin ◽  
Hsien-Yuan Lane

A growing body of evidence currently proposes that deep learning approaches can serve as an essential cornerstone for the diagnosis and prediction of Alzheimer’s disease (AD). In light of the latest advancements in neuroimaging and genomics, numerous deep learning models are being exploited in recent research studies to distinguish AD from normal controls and/or from mild cognitive impairment. In this review, we focus on the latest developments in AD prediction using deep learning techniques in cooperation with the principles of neuroimaging and genomics. First, we describe investigations that use deep learning algorithms to establish AD prediction from genomics or neuroimaging data. In particular, we delineate relevant integrative neuroimaging genomics investigations that leverage deep learning methods to forecast AD by incorporating both neuroimaging and genomics data. Moreover, we outline the limitations of recent AD investigations that apply deep learning to neuroimaging and genomics. Finally, we discuss challenges and directions for future research. The main novelty of this work is that we summarize the major points of these investigations and scrutinize their similarities and differences.


Sensors ◽  
2021 ◽  
Vol 21 (13) ◽  
pp. 4486
Author(s):  
Niall O’Mahony ◽  
Sean Campbell ◽  
Lenka Krpalkova ◽  
Anderson Carvalho ◽  
Joseph Walsh ◽  
...  

Fine-grained change detection in sensor data is very challenging for artificial intelligence though it is critically important in practice. It is the process of identifying differences in the state of an object or phenomenon where the differences are class-specific and are difficult to generalise. As a result, many recent technologies that leverage big data and deep learning struggle with this task. This review focuses on the state-of-the-art methods, applications, and challenges of representation learning for fine-grained change detection. Our research focuses on methods of harnessing the latent metric space of representation learning techniques as an interim output for hybrid human-machine intelligence. We review methods for transforming and projecting embedding space such that significant changes can be communicated more effectively and a more comprehensive interpretation of underlying relationships in sensor data is facilitated. We conduct this research in our work towards developing a method for aligning the axes of latent embedding space with meaningful real-world metrics so that the reasoning behind the detection of change in relation to past observations may be revealed and adjusted. This is an important topic in many fields concerned with producing more meaningful and explainable outputs from deep learning, as well as with providing means for knowledge injection and model calibration in order to maintain user confidence.
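As a rough illustration of what aligning latent axes with real-world metrics can look like (not the authors' method), the sketch below fits a least-squares linear map from learned embeddings to known metric values and then uses it to project new embeddings onto interpretable axes; the variable names and synthetic data are assumptions.

import numpy as np

def fit_alignment(embeddings, metrics):
    """embeddings: (N, D) latent vectors; metrics: (N, M) measured real-world quantities."""
    # Solve W in  embeddings @ W ~= metrics  (one column of W per metric axis).
    W, *_ = np.linalg.lstsq(embeddings, metrics, rcond=None)
    return W                                    # (D, M)

def project(embedding, W):
    """Express a new embedding on the metric-aligned axes."""
    return embedding @ W                        # (M,)

# Example with synthetic data: 2 interpretable metrics hidden in 16-D embeddings.
rng = np.random.default_rng(0)
true_metrics = rng.normal(size=(200, 2))
mixing = rng.normal(size=(2, 16))
X = true_metrics @ mixing + 0.01 * rng.normal(size=(200, 16))
W = fit_alignment(X, true_metrics)
print(project(X[0], W), true_metrics[0])        # projected coordinates vs. ground truth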


2021 ◽  
Vol 11 (11) ◽  
pp. 4753
Author(s):  
Gen Ye ◽  
Chen Du ◽  
Tong Lin ◽  
Yan Yan ◽  
Jack Jiang

(1) Background: Deep learning has become ubiquitous due to its impressive performance in domains as varied as computer vision, natural language and speech processing, and game playing. In this work, we investigated the performance of recent deep learning approaches on the laryngopharyngeal reflux (LPR) diagnosis task. (2) Methods: Our dataset is composed of 114 subjects, with 37 pH-positive cases and 77 control cases. In contrast to prior work based on either the reflux finding score (RFS) or pH monitoring, we directly take laryngoscope images as inputs to neural networks, as laryngoscopy is the most common and simple diagnostic method. The diagnosis task is formulated as a binary classification problem. We first tested a powerful backbone network that incorporates residual modules, an attention mechanism, and data augmentation. Furthermore, recent methods in transfer learning and few-shot learning were investigated. (3) Results: On our dataset, the best test classification accuracy is 73.4%, while the best AUC value is 76.2%. (4) Conclusions: This study demonstrates that deep learning techniques can be applied to classify LPR images automatically. Although the number of pH-positive images available for training is limited, deep networks are still capable of learning discriminative features with the aid of these techniques.
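A hedged sketch of the kind of transfer-learning setup described above: a residual backbone pretrained on ImageNet, fine-tuned for the binary pH-positive vs. control classification, with simple data augmentation. The backbone choice, augmentations, and hyperparameters are assumptions, not the authors' exact configuration.

import torch
import torch.nn as nn
from torchvision import models, transforms

# Simple augmentation of the kind mentioned (resize, flips, colour jitter).
train_tf = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.RandomHorizontalFlip(),
    transforms.ColorJitter(brightness=0.2, contrast=0.2),
    transforms.ToTensor(),
])

# Pretrained residual backbone with a new 2-class head (pH-positive vs. control).
model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V1)
model.fc = nn.Linear(model.fc.in_features, 2)

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

def train_step(images, labels):
    """One fine-tuning step on a batch of laryngoscope images."""
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
    return loss.item()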


Author(s):  
S. Su ◽  
T. Nawata ◽  
T. Fuse

Abstract. Automatic building change detection has become a topical issue owing to its wide range of applications, such as updating building maps. However, accurate building change detection remains challenging, particularly in urban areas. Thus far, there has been limited research on the use of the outdated building map (the building map before the update, referred to herein as the old-map) to increase the accuracy of building change detection. This paper presents a novel deep-learning-based method for building change detection using bitemporal aerial images containing RGB bands, bitemporal digital surface models (DSMs), and an old-map. The aerial images have one of two spatial resolutions, 12.5 cm or 16 cm, and the cell size of the DSMs is 50 cm × 50 cm. The bitemporal aerial images, the height variations calculated as the differences between the bitemporal DSMs, and the old-map were fed into a network architecture to build an automatic building change detection model. The performance of the model was quantitatively and qualitatively evaluated on an urban area covering approximately 10 km² and containing over 21,000 buildings. The results indicate that the model detects building changes more accurately than methods that use i) bitemporal aerial images only, ii) bitemporal aerial images and bitemporal DSMs, or iii) bitemporal aerial images and an old-map. The proposed method achieved recall rates of 89.3%, 88.8%, and 99.5% for new, demolished, and other buildings, respectively. The results also demonstrate that the old-map is an effective data source for increasing building change detection accuracy.
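For concreteness, here is a minimal sketch of how the three data sources described above could be stacked into a single network input (bitemporal RGB, the DSM height difference, and a rasterised old-map); the channel order and normalisation are assumptions, not the authors' exact pre-processing.

import numpy as np

def build_input(rgb_t1, rgb_t2, dsm_t1, dsm_t2, old_map):
    """All arrays are co-registered to the same grid: RGB (H, W, 3), DSMs and old_map (H, W)."""
    height_diff = dsm_t2 - dsm_t1                          # height-variation layer
    stacked = np.concatenate([
        rgb_t1.astype(np.float32) / 255.0,                 # 3 channels, epoch 1
        rgb_t2.astype(np.float32) / 255.0,                 # 3 channels, epoch 2
        height_diff[..., None].astype(np.float32),         # 1 channel, DSM difference
        old_map[..., None].astype(np.float32),             # 1 channel, binary old-map mask
    ], axis=-1)
    return stacked                                         # (H, W, 8) network input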


Author(s):  
L. Madhuanand ◽  
F. Nex ◽  
M. Y. Yang

Abstract. Depth is an essential component of various scene understanding tasks and of reconstructing the 3D geometry of a scene. Estimating depth from stereo images requires multiple views of the same scene, which is often not possible when exploring new environments with a UAV. To overcome this, monocular depth estimation has become a topic of interest given recent advancements in computer vision and deep learning techniques. Research in this area has largely focused on indoor scenes or outdoor scenes captured at ground level. Single-image depth estimation from aerial images has been limited due to the additional complexities arising from increased camera distance and wider area coverage with many occlusions. A new aerial image dataset is prepared specifically for this purpose by combining Unmanned Aerial Vehicle (UAV) images covering different regions, features, and points of view. The single-image depth estimation is based on image reconstruction techniques that use stereo images to learn to estimate depth from single images. Among the various available models for ground-level single-image depth estimation, two models, 1) a Convolutional Neural Network (CNN) and 2) a Generative Adversarial Network (GAN), are used to learn depth from UAV aerial images. These models generate pixel-wise disparity images that can be converted into depth information. The generated disparity maps are evaluated for internal quality using various error metrics. The results show that the CNN model generates smoother images with a wider disparity range, while the GAN model generates sharper images with a narrower disparity range. The produced disparity images are converted to depth information and compared with point clouds obtained using Pix4D. The CNN model is found to perform better than the GAN model and to produce depth similar to that of Pix4D. This comparison helps in streamlining the efforts to produce depth from a single aerial image.
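The disparity-to-depth conversion mentioned above follows the standard stereo relation depth = focal length × baseline / disparity. A minimal sketch, with placeholder focal length and baseline values rather than the paper's calibration:

import numpy as np

def disparity_to_depth(disparity, focal_px, baseline_m, eps=1e-6):
    """depth = f * B / d, elementwise; near-zero (invalid) disparities map to 0."""
    depth = focal_px * baseline_m / np.maximum(disparity, eps)
    depth[disparity <= eps] = 0.0
    return depth

# Example: a 480x640 disparity map, 1000 px focal length, 0.5 m baseline (placeholders).
d = np.random.uniform(1.0, 64.0, size=(480, 640)).astype(np.float32)
depth_m = disparity_to_depth(d, focal_px=1000.0, baseline_m=0.5)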


Author(s):  
Bosede Iyiade Edwards ◽  
Nosiba Hisham Osman Khougali ◽  
Adrian David Cheok

With the recent focus on deep neural network architectures for the development of algorithms for computer-aided diagnosis (CAD), we provide a review of studies within the last 3 years (2015-2017) reported in selected top journals and conferences. Twenty-nine studies that met our inclusion criteria were reviewed to identify trends in this field and to inform future development. Studies have focused mostly on cancer-related diseases within internal medicine, while diseases within gender-/age-focused fields like gynaecology/pediatrics have not received much focus. All reviewed studies employed image datasets, mostly sourced from publicly available databases (55.2%), with fewer based on data from human subjects (31%) and non-medical datasets (13.8%); a CNN architecture was employed in most (70%) of the studies. Confirmation of the effect of data manipulation on output quality and the adoption of multi-class rather than binary classification also require more attention. Future studies should leverage collaborations with medical experts to support actual clinical testing, with reporting based on some generally applicable index to enable comparison. Our next steps in CAD development for osteoarthritis (OA), including plans to consider multi-class classification and comparison across deep learning approaches and unsupervised architectures, are also highlighted.


2022 ◽  
pp. 27-50
Author(s):  
Rajalaxmi Prabhu B. ◽  
Seema S.

A lot of user-generated data is available these days from large platforms, blogs, websites, and other review sites. These data are usually unstructured, and automatically analyzing the sentiments they express is considered an important challenge. Several machine learning algorithms have been implemented to extract opinions from large data sets, and a lot of research has gone into understanding machine learning approaches to sentiment analysis. Machine learning depends heavily on the data used for model building, and hence suitable feature extraction techniques also need to be applied. In this chapter, several deep learning approaches, their challenges, and future issues are addressed. Deep learning techniques are considered important in predicting the sentiments of users. This chapter aims to analyze deep learning techniques for predicting sentiments and to explain the importance of several approaches for mining opinions and determining sentiment polarity.
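As an illustration only (not taken from the chapter), here is a minimal PyTorch sketch of a common deep-learning sentiment classifier: an embedding layer feeding an LSTM and a binary polarity head; the vocabulary size and dimensions are assumptions.

import torch
import torch.nn as nn

class SentimentLSTM(nn.Module):
    def __init__(self, vocab_size=20000, embed_dim=128, hidden_dim=64):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim, padding_idx=0)
        self.lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        self.head = nn.Linear(hidden_dim, 1)          # positive vs. negative polarity

    def forward(self, token_ids):                     # (B, T) integer token ids
        emb = self.embed(token_ids)
        _, (h_n, _) = self.lstm(emb)                  # last hidden state summarises the review
        return torch.sigmoid(self.head(h_n[-1]))      # (B, 1) polarity score in [0, 1]

model = SentimentLSTM()
scores = model(torch.randint(1, 20000, (4, 32)))      # 4 dummy reviews of 32 tokens each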


IoT ◽  
2020 ◽  
Vol 1 (2) ◽  
pp. 551-604
Author(s):  
Damien Warren Fernando ◽  
Nikos Komninos ◽  
Thomas Chen

This survey investigates the contributions of research into the detection of ransomware using machine learning and deep learning algorithms. The main motivations for this study are the destructive nature of ransomware, the difficulty of reversing a ransomware infection, and the importance of detecting it before it infects a system. Machine learning is coming to the forefront of combatting ransomware, so we attempted to identify weaknesses in machine learning approaches and how they can be strengthened. The threat posed by ransomware is exceptionally high, with new variants and families continually being found on the internet and dark web. Recovering from ransomware infections is difficult, given the nature of the encryption schemes they use. The increase in the use of artificial intelligence also coincides with this boom in ransomware. Machine learning and deep learning approaches to ransomware detection are of high interest because they can detect zero-day threats: these techniques can generate predictive models that learn the behaviour of ransomware and use this knowledge to detect variants and families that have not yet been seen. In this survey, we review prominent research studies that each showcase a machine learning or deep learning approach to detecting ransomware. These studies were chosen based on their citation counts in other research. We carried out experiments to investigate how the discussed studies are affected by malware evolution. We also explored the new directions of ransomware and how we expect it to evolve in the coming years, such as expansion into the IoT (Internet of Things), as IoT devices are integrated more into infrastructures and homes.
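As a toy illustration of the behaviour-based detection idea (not taken from any specific study in the survey), the sketch below trains a classifier on synthetic dynamic-analysis features such as API-call volume and file-entropy change; the feature names, data, and model choice are all assumptions.

import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
# Synthetic stand-in for behaviour features: [api_calls, files_renamed, entropy_delta].
X_benign = rng.normal([50, 2, 0.05], [20, 2, 0.05], size=(500, 3))
X_ransom = rng.normal([300, 400, 0.6], [80, 150, 0.2], size=(500, 3))
X = np.vstack([X_benign, X_ransom])
y = np.array([0] * 500 + [1] * 500)                    # 0 = benign, 1 = ransomware

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
clf = RandomForestClassifier(n_estimators=100).fit(X_tr, y_tr)
print("held-out accuracy:", clf.score(X_te, y_te))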

