Layer Image
Recently Published Documents

Total documents: 62 (five years: 19)
H-index: 7 (five years: 3)

Author(s): Su Yan, Lei Yu

Abstract: Simultaneous Localization and Mapping (SLAM) is one of the key technologies used in robotic sweepers, autonomous vehicles, virtual reality and other fields. This paper presents a dense RGB-D SLAM reconstruction algorithm based on a convolutional neural network with multi-layer image invariant feature transformation. The main contribution of the system lies in the construction of a convolutional neural network based on multi-layer image invariant features, which improves the extraction of ORB (Oriented FAST and Rotated BRIEF) feature points and the quality of the reconstruction. After feature point matching, pose estimation, loop detection and other steps, the 3D point clouds are finally stitched into a complete and smooth spatial model. The system improves accuracy and robustness in feature point processing and pose estimation. Comparative experiments show that the optimized algorithm saves 0.093 s compared with the ordinary extraction algorithm while maintaining a high accuracy rate. The reconstruction experiments show that the resulting spatial models have clearer details and smoother connections, with no fault layers, compared with the original ones. The reconstruction results are generally better than those of other common algorithms such as Kintinuous, ElasticFusion and ORB-SLAM2 dense reconstruction.
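As an illustration of the classical pipeline this abstract builds on (ORB feature extraction, matching and pose estimation), a minimal OpenCV sketch is given below. It does not reproduce the paper's CNN-based multi-layer invariant feature optimization; the parameter values and the helper function estimate_relative_pose are illustrative assumptions.

```python
# Minimal sketch of ORB matching and two-view pose estimation with OpenCV.
# Camera intrinsics K are assumed known; values such as nfeatures=1000 and
# keeping the 300 best matches are illustrative choices, not the paper's.
import cv2
import numpy as np

def estimate_relative_pose(img1, img2, K):
    """Match ORB features between two frames and recover the relative pose (R, t)."""
    orb = cv2.ORB_create(nfeatures=1000)                 # Oriented FAST + Rotated BRIEF
    kp1, des1 = orb.detectAndCompute(img1, None)
    kp2, des2 = orb.detectAndCompute(img2, None)

    # Brute-force Hamming matching with cross-checking, keeping the best matches
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)[:300]

    pts1 = np.float32([kp1[m.queryIdx].pt for m in matches])
    pts2 = np.float32([kp2[m.trainIdx].pt for m in matches])

    # Essential matrix with RANSAC, decomposed into rotation and translation
    E, _ = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC, threshold=1.0)
    _, R, t, _ = cv2.recoverPose(E, pts1, pts2, K)
    return R, t
```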


2021, Vol 7, pp. e364
Author(s): Omar M. Elzeki, Mohamed Abd Elfattah, Hanaa Salem, Aboul Ella Hassanien, Mahmoud Shams

Background and Purpose: COVID-19 is a new strain of coronavirus that has brought everyday life to a standstill worldwide. At the time of writing, COVID-19 is spreading rapidly across the world and poses a threat to people's health. Experimental medical tests and analyses have shown that lung infection occurs in almost all COVID-19 patients. Although computed tomography of the chest is a useful imaging method for diagnosing lung-related diseases, chest X-ray (CXR) is more widely available, mainly because of its lower cost and faster results. Deep learning (DL), one of the most popular artificial intelligence techniques, is an effective way to help doctors analyze CXR images, and having a large number of such images is crucial to its performance.

Materials and Methods: In this article, we propose a novel perceptual two-layer image fusion method using DL to obtain more informative CXR images for a COVID-19 dataset. To assess the performance of the proposed algorithm, the dataset used in this work includes 87 CXR images acquired from 25 cases, all confirmed with COVID-19. Preprocessing of the dataset is needed to facilitate the role of the convolutional neural networks (CNN). Thus, a hybrid scheme combining decomposition and fusion by the Nonsubsampled Contourlet Transform (NSCT) with CNN_VGG19 as a feature extractor was used.

Results: Our experimental results show that the algorithm established here can reliably generate images for imbalanced COVID-19 datasets. Compared to the original COVID-19 dataset, the fused images contain more features and characteristics. For performance evaluation, six metrics (QAB/F, QMI, PSNR, SSIM, SF and STD) are applied to assess various medical image fusion (MIF) methods. On QMI, PSNR and SSIM, the proposed NSCT + CNN_VGG19 algorithm achieves the highest scores, and its fused images contain the richest features and characteristics. We can deduce that the proposed fusion algorithm is efficient enough to generate CXR COVID-19 images that are more useful for the examiner to explore patient status.

Conclusions: A novel image fusion algorithm using DL for an imbalanced COVID-19 dataset is the crucial contribution of this work. Extensive experimental results show that the proposed NSCT + CNN_VGG19 algorithm outperforms competing image fusion algorithms.
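For illustration only, the sketch below shows a CNN_VGG19-guided fusion of two CXR images in the spirit of the abstract. The NSCT decomposition stage and the authors' exact fusion rule are not reproduced; the chosen VGG19 layer ('block1_conv2'), the L1 activity measure and the normalized weighting scheme are assumptions.

```python
# Sketch of VGG19-activity-guided fusion of two grayscale CXR images.
# The NSCT decomposition used by the authors is omitted; layer choice,
# activity measure and weighting are illustrative assumptions.
import numpy as np
import tensorflow as tf
from tensorflow.keras.applications import VGG19
from tensorflow.keras.applications.vgg19 import preprocess_input

# Early VGG19 conv layer whose feature maps keep the input resolution
base = VGG19(weights="imagenet", include_top=False)
feature_model = tf.keras.Model(base.input, base.get_layer("block1_conv2").output)

def activity_map(gray_img):
    """Per-pixel activity measure: L1 norm of VGG19 feature maps."""
    x = np.repeat(gray_img[None, ..., None], 3, axis=-1).astype("float32")
    feats = feature_model(preprocess_input(x)).numpy()[0]
    return np.abs(feats).sum(axis=-1)

def fuse(img_a, img_b):
    """Weighted-average fusion of two same-size grayscale CXR images."""
    wa, wb = activity_map(img_a), activity_map(img_b)
    weight_a = wa / (wa + wb + 1e-8)          # normalized activity weights
    return weight_a * img_a + (1.0 - weight_a) * img_b
```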


2021, Vol 53, pp. 585-593
Author(s): Jinwoo Song, Young Moon

Author(s): Jinwoo Song, Harika Bandaru, Xinyu He, Zhenyang Qiu, Young B. Moon

Abstract: In Additive Manufacturing (AM), detecting cyber-attacks on the infill structure is difficult because interior defects can occur without affecting the exterior. To detect infill defects quickly, layer-by-layer image inspection can be conducted in real time. However, collecting the layered images from the top view in real time is challenging because the 3D printer's extruder prevents the object from being scanned completely. Using a dummy model to move the extruder away from the object's layer has been proposed, but it is not practical because it introduces printing delays and wastes printing material. To enable infill layered image collection in real time without delays or material waste, this research proposes a layered image collection method based on an algorithm that identifies the pseudo area in a layered image. The algorithm detects the pseudo area, i.e. the area covered by the extruder, using image processing techniques such as average pooling and max pooling, and accumulates the non-pseudo areas until a complete layered image is acquired. To validate and evaluate the proposed method, the captured images were evaluated with various machine learning algorithms.
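A minimal sketch of the described pseudo-area idea follows, assuming a dark extruder silhouette against a brighter print bed; the pooling window, the intensity threshold and the helper names are illustrative, not the authors' parameters.

```python
# Sketch of pseudo-area masking and frame accumulation for layered images.
# Window size, threshold and the dark-extruder assumption are illustrative.
import cv2
import numpy as np

def pseudo_area_mask(gray, win=15, threshold=40):
    """Flag pixels in the pseudo area (region occluded by the extruder).
    Average pooling (box filter) smooths the frame; regions darker than the
    threshold are treated as covered by the extruder."""
    smoothed = cv2.blur(gray.astype(np.float32), (win, win))
    return smoothed < threshold

def accumulate_layer(frames):
    """Merge successive top-view frames, keeping only non-pseudo pixels,
    until a complete layered image is acquired."""
    layered = np.zeros_like(frames[0])
    filled = np.zeros(frames[0].shape, dtype=bool)
    for frame in frames:
        valid = ~pseudo_area_mask(frame) & ~filled   # new, unoccluded pixels
        layered[valid] = frame[valid]
        filled |= valid
        if filled.all():                             # complete layer recovered
            break
    return layered
```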


Securing data (text, audio and images) is becoming more complex as its volume increases. To increase reliability, a CAPTCHA (Completely Automated Public Turing test to tell Computers and Humans Apart) is used to ensure authenticity; its aim is to identify whether the user is genuine or just a robot trying to spam the system. Even so, CAPTCHAs can be hacked and security can easily be breached. This paper presents an auxiliary hybridization of the AES and Blowfish cryptographic algorithms for image encipherment and decipherment. Here, AES uses Blowfish as a subroutine: the image to be encrypted is first processed by the AES algorithm, and its output is then used as the input to the Blowfish algorithm. The doubly encrypted image is decrypted in the reverse order of encipherment, with Blowfish performing the first-level decryption before the result is handed to AES for the second level. This auxiliary hybridization adds security to the image, making the scheme useful for organizations with high security requirements. Private key cryptography uses a single secret key at both the sender and the receiver end. Using a symmetric key cryptographic algorithm for this process makes it faster and more secure than using asymmetric cryptographic algorithms for the same purpose. Moreover, symmetric key cryptographic algorithms are better suited to larger files and images, and they also help maintain the confidentiality of the data.
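A minimal sketch of the described two-stage encipherment follows, assuming PyCryptodome with CBC mode, random IVs and PKCS#7 padding (implementation choices not specified in the abstract): the image bytes are encrypted with AES, the result is encrypted with Blowfish, and decryption runs in the reverse order.

```python
# Sketch of AES-then-Blowfish double encryption of image bytes (PyCryptodome).
# Mode, IV handling and padding are assumptions, not the paper's exact scheme.
from Crypto.Cipher import AES, Blowfish
from Crypto.Random import get_random_bytes
from Crypto.Util.Padding import pad, unpad

def double_encrypt(image_bytes, aes_key, bf_key):
    aes_iv = get_random_bytes(16)
    stage1 = AES.new(aes_key, AES.MODE_CBC, aes_iv).encrypt(pad(image_bytes, 16))
    bf_iv = get_random_bytes(8)                       # Blowfish block size is 8 bytes
    stage2 = Blowfish.new(bf_key, Blowfish.MODE_CBC, bf_iv).encrypt(pad(stage1, 8))
    return aes_iv + bf_iv + stage2

def double_decrypt(blob, aes_key, bf_key):
    aes_iv, bf_iv, stage2 = blob[:16], blob[16:24], blob[24:]
    stage1 = unpad(Blowfish.new(bf_key, Blowfish.MODE_CBC, bf_iv).decrypt(stage2), 8)
    return unpad(AES.new(aes_key, AES.MODE_CBC, aes_iv).decrypt(stage1), 16)

# Example: round-trip raw image bytes with two random symmetric keys
if __name__ == "__main__":
    aes_key, bf_key = get_random_bytes(32), get_random_bytes(16)
    with open("image.png", "rb") as f:                # hypothetical input file
        data = f.read()
    blob = double_encrypt(data, aes_key, bf_key)
    assert double_decrypt(blob, aes_key, bf_key) == data
```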


Sensors, 2020, Vol 20 (6), pp. 1556
Author(s): Zhenyu Li, Aiguo Zhou, Yong Shen

Scene recognition is an essential part of the vision-based robot navigation domain. The successful application of deep learning technology has triggered more extensive preliminary studies on scene recognition, all of which use features extracted from networks trained for recognition tasks. In this paper, we interpret scene recognition as a region-based image retrieval problem and present a novel approach for scene recognition with an end-to-end trainable Multi-Column Convolutional Neural Network (MCNN) architecture. The proposed MCNN uses filters with receptive fields of different sizes to achieve multi-level and multi-layer image perception, and consists of three components: a front-end, a middle-end and a back-end. The first seven layers of VGG16 are taken as the front-end for two-dimensional feature extraction, Inception-A as the middle-end for deeper feature representation, and Large-Margin Softmax Loss (L-Softmax) as the back-end for enhancing intra-class compactness and inter-class separability. Extensive experiments have been conducted to compare the proposed network with existing state-of-the-art methods. Experimental results on three popular datasets demonstrate the robustness and accuracy of our approach. To the best of our knowledge, the presented approach has not previously been applied to scene recognition in the literature.
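As a structural illustration of the multi-column idea, the PyTorch sketch below combines the first seven layers of VGG16 with a parallel wide-receptive-field column. The Inception-A middle-end and the L-Softmax back-end from the abstract are simplified to a plain linear classifier; column widths, kernel sizes and the default class count are assumptions.

```python
# Structural sketch of a multi-column scene-recognition network.
# The paper's Inception-A middle-end and L-Softmax back-end are replaced
# by a plain linear head; all sizes here are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F
from torchvision.models import vgg16

class MultiColumnSceneNet(nn.Module):
    def __init__(self, num_classes=67):
        super().__init__()
        # Front-end column: first seven layers of pretrained VGG16 (128 output channels)
        self.vgg_column = nn.Sequential(*list(vgg16(weights="DEFAULT").features[:7]))
        # Parallel column with a larger receptive field (illustrative width)
        self.wide_column = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=7, padding=3), nn.ReLU(inplace=True),
            nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=7, padding=3), nn.ReLU(inplace=True),
        )
        # Stand-in classifier head (the paper uses Inception-A + L-Softmax here)
        self.classifier = nn.Linear(128 + 64, num_classes)

    def forward(self, x):
        columns = [self.vgg_column(x), self.wide_column(x)]
        # Global-average-pool each column, concatenate the descriptors, classify
        descriptor = torch.cat(
            [F.adaptive_avg_pool2d(c, 1).flatten(1) for c in columns], dim=1)
        return self.classifier(descriptor)

# Example forward pass on a batch of RGB scene images
logits = MultiColumnSceneNet()(torch.randn(2, 3, 224, 224))
```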

