Image Depth Analysis: From Deep Learning to Parallel Cluster Computing

Author(s):  
Lian Ding ◽  
Wei Du

Solar Energy ◽  
2021 ◽  
Vol 224 ◽  
pp. 855-867
Author(s):  
Quentin Paletta ◽  
Guillaume Arbod ◽  
Joan Lasenby

Author(s):  
L. Madhuanand ◽  
F. Nex ◽  
M. Y. Yang

Abstract. Depth is an essential component of various scene-understanding tasks and of reconstructing the 3D geometry of a scene. Estimating depth from stereo images requires multiple views of the same scene to be captured, which is often not possible when exploring new environments with a UAV. To overcome this, monocular depth estimation has become a topic of interest alongside recent advances in computer vision and deep learning. Research to date has focused largely on indoor scenes or on outdoor scenes captured at ground level. Single-image depth estimation from aerial images has been limited by additional complexities arising from increased camera distance and wider area coverage with numerous occlusions. A new aerial image dataset is prepared specifically for this purpose, combining Unmanned Aerial Vehicle (UAV) images covering different regions, features, and points of view. The single-image depth estimation is based on image reconstruction techniques, which use stereo images to learn to estimate depth from single images. Among the various available models for ground-level single-image depth estimation, two models, 1) a Convolutional Neural Network (CNN) and 2) a Generative Adversarial Network (GAN), are used to learn depth from UAV aerial images. These models generate pixel-wise disparity images that can be converted into depth information. The generated disparity maps are evaluated for internal quality using various error metrics. The results show that the CNN model produces smoother images with a higher disparity range, while the GAN model produces sharper images with a smaller disparity range. The produced disparity images are converted to depth information and compared with point clouds obtained using Pix4D. The CNN model is found to perform better than the GAN and produces depth similar to that of Pix4D. This comparison helps in streamlining efforts to produce depth from a single aerial image.
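
For readers unfamiliar with the disparity-to-depth step mentioned above, a minimal sketch of the standard stereo relation depth = focal length × baseline / disparity is shown below. The camera parameters and the stand-in disparity map are hypothetical placeholders, not values from the paper.

```python
import numpy as np

def disparity_to_depth(disparity, focal_px, baseline_m, eps=1e-6):
    """Convert a pixel-wise disparity map to metric depth.

    Uses the standard stereo relation depth = focal * baseline / disparity.
    Pixels with (near-)zero disparity are mapped to an invalid value (inf).
    """
    disparity = np.asarray(disparity, dtype=np.float64)
    depth = np.full_like(disparity, np.inf)
    valid = disparity > eps
    depth[valid] = focal_px * baseline_m / disparity[valid]
    return depth

# Hypothetical UAV camera parameters, for illustration only.
disp = np.random.uniform(0.5, 64.0, size=(480, 640))  # stand-in disparity map
depth = disparity_to_depth(disp, focal_px=2000.0, baseline_m=0.3)
```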


Author(s):  
M. Sester ◽  
Y. Feng ◽  
F. Thiemann

Abstract. Cartographic generalization is a problem which poses interesting challenges to automation. Whereas plenty of algorithms have been developed for the different sub-problems of generalization (e.g. simplification, displacement, aggregation), there are still cases which are not generalized adequately or in a satisfactory way. The main problem is the interplay between different operators. In those cases the benchmark is the human operator, who is able to design an aesthetic and correct representation of the physical reality.

Deep learning methods have shown tremendous success on interpretation problems for which algorithmic methods have deficits. A prominent example is the classification and interpretation of images, where deep learning approaches outperform traditional computer vision methods. In both domains (computer vision and cartography), humans are able to produce a solution; a prerequisite for this is the possibility of generating many training examples for the different cases. The idea in this paper is therefore to employ deep learning for cartographic generalization tasks, especially for building generalization. An advantage of this task is that many training datasets are available from existing map series. The approach is a first attempt using an existing network.

The paper reports the details of the implementation, together with an in-depth analysis of the results, and gives an outlook on future work.
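
The abstract does not name the network it reuses. As a hedged illustration of the general idea of learning building generalization from map-series examples, the sketch below trains a toy encoder-decoder to map a detailed building raster to a generalized target raster; all layer sizes and the random stand-in data are assumptions, not the authors' setup.

```python
import torch
import torch.nn as nn

class BuildingGeneralizer(nn.Module):
    """Toy encoder-decoder mapping a rasterized building footprint
    to its generalized counterpart (both as 1-channel images)."""
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(32, 16, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(16, 1, 4, stride=2, padding=1), nn.Sigmoid(),
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

model = BuildingGeneralizer()
loss_fn = nn.BCELoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

# One illustrative training step on random stand-in rasters:
# x = detailed footprint, y = generalized target from a map series.
x = torch.rand(8, 1, 64, 64)
y = (torch.rand(8, 1, 64, 64) > 0.5).float()
optimizer.zero_grad()
loss = loss_fn(model(x), y)
loss.backward()
optimizer.step()
```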


Cancers ◽  
2021 ◽  
Vol 13 (23) ◽  
pp. 6048
Author(s):  
Joanna Jaworek-Korjakowska ◽  
Andrzej Brodzicki ◽  
Bill Cassidy ◽  
Connah Kendrick ◽  
Moi Hoon Yap

Over the past few decades, different clinical diagnostic algorithms have been proposed to diagnose malignant melanoma in its early stages. Furthermore, the detection of skin moles with current deep-learning-based approaches yields impressive results in the classification of malignant melanoma. However, none of these approaches takes into account the origin of the skin lesion, even though the specific criteria for in situ and early invasive melanoma have been observed to depend strongly on the anatomic site of the body. To address this problem, we propose a deep-learning-based framework to classify skin lesions into the three most important anatomic sites: the face, the trunk and extremities, and acral areas. In this study, we take advantage of pretrained networks, including VGG19, ResNet50, Xception, DenseNet121, and EfficientNetB0, to compute features that feed an adjusted, densely connected classifier. Furthermore, we perform an in-depth analysis of the database, the architectures, and the results regarding the effectiveness of the proposed framework. Experiments confirm the ability of the developed algorithms to classify skin lesions into the most important anatomic sites with 91.45% overall accuracy for the EfficientNetB0 architecture, a state-of-the-art result in this domain.
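
As a hedged sketch of the transfer-learning setup described above (an ImageNet-pretrained backbone with an adjusted, densely connected classifier), the snippet below swaps the stock EfficientNetB0 head for a small dense classifier over the three site categories. The hidden width, dropout rate, and input size are illustrative assumptions, not the paper's settings.

```python
import torch
import torch.nn as nn
from torchvision import models

# Load an ImageNet-pretrained EfficientNetB0 backbone and freeze it.
weights = models.EfficientNet_B0_Weights.IMAGENET1K_V1
backbone = models.efficientnet_b0(weights=weights)
for p in backbone.parameters():
    p.requires_grad = False

# Swap the stock classifier for a densely connected head over the
# three anatomic-site classes (face / trunk and extremities / acral).
num_classes = 3
backbone.classifier = nn.Sequential(
    nn.Dropout(p=0.3),            # assumed dropout rate
    nn.Linear(1280, 256),         # 1280 = EfficientNetB0 feature width
    nn.ReLU(),
    nn.Linear(256, num_classes),
)

# Forward pass on a dummy batch of 224x224 RGB lesion images.
x = torch.rand(4, 3, 224, 224)
logits = backbone(x)              # shape: (4, 3)
```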


2020 ◽  
Vol 92 ◽  
pp. 106272 ◽  
Author(s):  
Kuo-Kun Tseng ◽  
Yaqi Zhang ◽  
Qinglin Zhu ◽  
K.L. Yung ◽  
W.H. Ip

2018 ◽  
Vol 29 (3) ◽  
pp. 67-88 ◽  
Author(s):  
Wen Zeng ◽  
Hongjiao Xu ◽  
Hui Li ◽  
Xiang Li

In the big data era, it is a great challenge to identify high-level abstract features in a flood of scientific and technical (sci-tech) literature to achieve in-depth analysis of the data. Deep learning technology has developed rapidly and found applications in many fields, but it has rarely been utilized in research on sci-tech literature data. This article introduces a method for representing terminologies from sci-tech literature in a vector space based on a deep learning model. It explores and adopts a deep autoencoder (AE) model to reduce the dimensionality of the input word-vector features, and puts forward a methodology for correlation analysis of sci-tech literature based on deep learning. The experimental results show that the processing of sci-tech literature data can be simplified into the computation of vectors in a multi-dimensional vector space, and that similarity in the vector space can represent similarity in text semantics. With this method, the correlation of subject content between sci-tech publications of the same or different types can be analyzed.
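
A minimal sketch of this approach, assuming (hypothetically) 300-dimensional input term vectors and a 32-dimensional code: a deep autoencoder compresses the vectors, and cosine similarity on the learned codes stands in for semantic similarity. All dimensions and data below are illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TermAutoencoder(nn.Module):
    """Deep autoencoder that compresses term vectors (e.g. 300-d)
    into a low-dimensional code used for similarity comparison."""
    def __init__(self, in_dim=300, code_dim=32):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(in_dim, 128), nn.ReLU(),
            nn.Linear(128, code_dim),
        )
        self.decoder = nn.Sequential(
            nn.Linear(code_dim, 128), nn.ReLU(),
            nn.Linear(128, in_dim),
        )

    def forward(self, x):
        code = self.encoder(x)
        return self.decoder(code), code

model = TermAutoencoder()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

# One reconstruction step on stand-in term vectors.
x = torch.rand(64, 300)
recon, code = model(x)
loss = F.mse_loss(recon, x)
opt.zero_grad()
loss.backward()
opt.step()

# Semantic relatedness of two terms via cosine similarity of their codes.
sim = F.cosine_similarity(code[0:1], code[1:2])  # tensor of shape (1,)
```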


2020 ◽  
Vol 10 (10) ◽  
pp. 2459-2465
Author(s):  
Iftikhar Ahmad ◽  
Muhammad Javed Iqbal ◽  
Mohammad Basheri

The volume of data gathered from various ongoing biological and clinical studies is increasing at an exponential rate. Bio-inspired data mainly comprises DNA genes, proteins, and a variety of proteomic and genetic diseases. Additionally, DNA microarray data are available for the early diagnosis and prediction of various types of cancer. Interestingly, this data may store vital information about genes, their structure, and important biological functions. The huge volume and constant growth of the extracted bio data pose several challenges. Many bioinformatics and machine learning models have been developed, but they fail to address the key challenges present in the efficient and accurate analysis of a variety of complex biologically inspired data, such as genetic diseases. The reliable and robust classification of the extracted data into different classes, based on the information hidden in the sample data, is likewise an interesting and open problem. This research work focuses on overcoming major challenges in accurate protein classification, building on the success of deep learning models in natural language processing by treating protein sequences as a language. The learning ability and overall classification performance of the proposed system are validated against deep learning classification models. The proposed system classifies the mentioned datasets more accurately than previous approaches and shows better results. In-depth analysis of multifaceted biological data may also help in the early diagnosis of diseases caused by gene mutations and in overcoming the challenges arising in the development of large-scale healthcare systems.
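
Treating protein sequences as a language, as the abstract proposes, can be sketched as follows: sequences over the 20-letter amino-acid alphabet are embedded and fed to a small bidirectional LSTM classifier. This is an illustrative stand-in, not the authors' model; the class count, sequence length, and layer sizes are assumptions.

```python
import torch
import torch.nn as nn

AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"
AA_TO_IDX = {aa: i + 1 for i, aa in enumerate(AMINO_ACIDS)}  # 0 = padding

def encode(seq, max_len=100):
    """Map an amino-acid string to a fixed-length index tensor."""
    idx = [AA_TO_IDX.get(aa, 0) for aa in seq[:max_len]]
    idx += [0] * (max_len - len(idx))
    return torch.tensor(idx)

class ProteinClassifier(nn.Module):
    """Treats protein sequences as 'sentences' over a 20-letter
    alphabet: embedding + bidirectional LSTM + linear classifier."""
    def __init__(self, num_classes, emb_dim=32, hidden=64):
        super().__init__()
        self.emb = nn.Embedding(len(AMINO_ACIDS) + 1, emb_dim, padding_idx=0)
        self.lstm = nn.LSTM(emb_dim, hidden, batch_first=True,
                            bidirectional=True)
        self.fc = nn.Linear(2 * hidden, num_classes)

    def forward(self, x):
        out, _ = self.lstm(self.emb(x))
        return self.fc(out.mean(dim=1))  # mean-pool over the sequence

model = ProteinClassifier(num_classes=5)  # hypothetical number of classes
batch = torch.stack([encode("MKTAYIAKQR"), encode("GAVLIPFMW")])
logits = model(batch)                     # shape: (2, 5)
```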


2020 ◽  
Author(s):  
Aman Gupta ◽  
Yadul Raghav

The problem of predicting links has gained much attention in recent years due to its vast applications in domains such as sociology, network analysis, and information science. Many methods have been proposed for link prediction, such as RA, AA, and CCLP. These methods require hand-crafted structural features to calculate the similarity scores between a pair of nodes in a network; some use local structural information, while others use global information about the graph, and they do not indicate which properties are better than others. An in-depth analysis of these methods suggests that one way to overcome this problem is to consider both network structure and node attribute information to capture discriminative features for the link prediction task. We propose a deep learning Autoencoder-based Link Prediction (ALP) architecture for the latent representation of a graph, unified with non-negative matrix factorization to automatically determine the underlying roles in a network and then assign a mixed membership over these roles to each node. The idea is to use these roles as a feature vector for the link prediction task. Cosine similarity is then applied to the resulting features to compute a pairwise similarity score between nodes. We present the performance of the algorithm on real-world datasets, where it gives competitive results compared to other algorithms.
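
A minimal sketch of the role-based scoring idea (not the full ALP architecture): non-negative matrix factorization of a toy adjacency matrix yields a mixed-membership role vector per node, and cosine similarity between role vectors scores candidate links. The graph, the role count k, and the NMF settings are illustrative assumptions.

```python
import numpy as np
from sklearn.decomposition import NMF
from sklearn.metrics.pairwise import cosine_similarity

# Toy undirected graph as a binary adjacency matrix (6 nodes).
A = np.array([
    [0, 1, 1, 0, 0, 0],
    [1, 0, 1, 0, 0, 0],
    [1, 1, 0, 1, 0, 0],
    [0, 0, 1, 0, 1, 1],
    [0, 0, 0, 1, 0, 1],
    [0, 0, 0, 1, 1, 0],
], dtype=float)

# Factor A ~ W @ H with non-negative entries; each row of W is a node's
# mixed membership over k latent roles.
k = 2
nmf = NMF(n_components=k, init="nndsvda", max_iter=500, random_state=0)
W = nmf.fit_transform(A)

# Pairwise link scores: cosine similarity between role vectors.
scores = cosine_similarity(W)
print(scores[0, 3])  # score for the (currently absent) edge 0-3
```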

