STN-Homography: Direct Estimation of Homography Parameters for Image Pairs

2019 ◽  
Vol 9 (23) ◽  
pp. 5187 ◽  
Author(s):  
Qiang Zhou ◽  
Xin Li

Estimating a 2D homography from a pair of images is a fundamental task in computer vision. Contrary to most convolutional neural network-based homography estimation methods, which use alternative four-point homography parameterization schemes, in this study we directly estimate the 3 × 3 homography matrix values. We show that after coordinate normalization, the magnitude differences and variances of the elements of the normalized 3 × 3 homography matrix are very small. Accordingly, we present STN-Homography, a neural network based on the spatial transformer network (STN), to directly estimate the normalized homography matrix of an image pair. To decrease the homography estimation error, we propose hierarchical STN-Homography and sequence STN-Homography models, of which the sequence STN-Homography can be trained in an end-to-end manner. The effectiveness of the proposed methods is demonstrated in experiments on the Microsoft common objects in context (MSCOCO) dataset, where they significantly outperform the current state-of-the-art. The average processing times of the three-stage hierarchical STN-Homography and the three-stage sequence STN-Homography models on a GPU are 17.85 ms and 13.85 ms, respectively. Both models satisfy the real-time processing requirements of most potential applications.
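The effect of the coordinate normalization the abstract relies on can be illustrated with a short numpy sketch. The matrix values and image size below are made up for illustration; the paper's own normalization may differ in detail:

```python
import numpy as np

# Hypothetical 3x3 homography expressed in raw pixel coordinates of a
# 640x480 image pair: the translation entries (h13, h23) dwarf the
# perspective entries (h31, h32) by several orders of magnitude.
H_pixel = np.array([[ 1.02, 0.05,  37.0],
                    [-0.03, 0.98, -12.5],
                    [ 1e-4, 2e-4,   1.0]])

def norm_matrix(w, h):
    """Map pixel coordinates to the square [-1, 1] x [-1, 1]."""
    return np.array([[2.0 / w, 0.0,     -1.0],
                     [0.0,     2.0 / h, -1.0],
                     [0.0,     0.0,      1.0]])

N = norm_matrix(640, 480)
# Conjugating by N expresses the same point mapping in normalized coordinates.
H_norm = N @ H_pixel @ np.linalg.inv(N)
H_norm /= H_norm[2, 2]  # fix the scale ambiguity

# The spread of element magnitudes shrinks from tens down to order one.
print(np.ptp(np.abs(H_pixel)), np.ptp(np.abs(H_norm)))
```

Because the conjugation leaves the underlying mapping unchanged while bringing translation and perspective terms onto a comparable scale, regressing all nine entries directly becomes a well-conditioned target for a network.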

2021 ◽  
Vol 11 (1) ◽  
Author(s):  
Yangfan Xu ◽  
Xianqun Fan ◽  
Yang Hu

Abstract Enzyme-catalyzed proximity labeling (PL) combined with mass spectrometry (MS) has emerged as a revolutionary approach to reveal protein-protein interaction networks, dissect complex biological processes, and characterize the subcellular proteome in a more physiological setting than previously possible. The enzymatic tags are being upgraded to improve temporal and spatial resolution and to achieve faster catalytic dynamics and higher catalytic efficiency. In vivo application of PL, integrated with other state-of-the-art techniques, has recently been extended to live animals and plants, allowing questions to be addressed that were previously inaccessible. It is timely to summarize the current state of PL-dependent interactome studies and their potential applications. We focus on in vivo uses of newer versions of PL and highlight critical considerations for successful in vivo PL experiments, which will provide novel insights into the protein interactome in the context of human diseases.


Author(s):  
Giulia Ischia ◽  
Luca Fiori

Abstract Hydrothermal carbonization (HTC) is an emerging path to give a new life to organic waste and residual biomass. Fulfilling the principles of the circular economy, HTC can transform "unpleasant" organics into useful materials and possibly energy carriers. The potential applications of HTC are tremendous, and the recent literature is full of investigations. In this context, models capable of predicting, simulating, and optimizing the HTC process, reactors, and plants are engineering tools that can significantly shift HTC research towards innovation by boosting the development of novel enterprises based on HTC technology. This review addresses a key issue: where do we stand regarding the development of these tools? The literature presents many, mostly simplified, models describing the reaction kinetics, some dealing with process simulation, but few focusing on the heart of an HTC system: the reactor. Statistical investigations and some life cycle assessment analyses also appear in the current state of the art. This work examines and analyzes these predictive tools, highlighting their potential and their limits. Overall, current models suffer from many shortcomings, from the lack of data to the intrinsic complexity of HTC reactions and HTC systems. Emphasis is therefore given to what is still needed to make the HTC process duly simulated, and therefore implementable on an industrial scale, with sufficient predictive margins.


Author(s):  
Esteban Real ◽  
Alok Aggarwal ◽  
Yanping Huang ◽  
Quoc V. Le

The effort devoted to hand-crafting neural network image classifiers has motivated the use of architecture search to discover them automatically. Although evolutionary algorithms have been repeatedly applied to neural network topologies, the image classifiers thus discovered have remained inferior to human-crafted ones. Here, we evolve an image classifier, AmoebaNet-A, that surpasses hand-designs for the first time. To do this, we modify the tournament selection evolutionary algorithm by introducing an age property to favor the younger genotypes. At matching size, AmoebaNet-A has accuracy comparable to current state-of-the-art ImageNet models discovered with more complex architecture-search methods. Scaled to larger size, AmoebaNet-A sets a new state-of-the-art 83.9% top-1 / 96.6% top-5 ImageNet accuracy. In a controlled comparison against a well-known reinforcement learning algorithm, we give evidence that evolution can obtain results faster with the same hardware, especially at the earlier stages of the search. This is relevant when fewer compute resources are available. Evolution is, thus, a simple method to effectively discover high-quality architectures.
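The aging mechanism can be sketched on a toy problem, with bit strings standing in for architectures and a one-counting fitness standing in for validation accuracy; all names and hyperparameters below are illustrative, not the paper's:

```python
import random
from collections import deque

# Toy sketch of aging evolution (regularized evolution). The twist over
# plain tournament selection is that each cycle removes the OLDEST
# individual from the population, not the worst one.

def fitness(genotype):
    return sum(genotype)  # stand-in for validation accuracy

def regularized_evolution(genome_len=20, pop_size=30, sample_size=5,
                          cycles=500, seed=0):
    rng = random.Random(seed)
    population = deque(
        [rng.randint(0, 1) for _ in range(genome_len)]
        for _ in range(pop_size)
    )
    best = max(population, key=fitness)
    for _ in range(cycles):
        # Tournament: sample a few individuals, breed from the fittest.
        parent = max(rng.sample(list(population), sample_size), key=fitness)
        child = list(parent)
        child[rng.randrange(genome_len)] ^= 1  # mutation: flip one bit
        population.append(child)   # the youngest joins on the right
        population.popleft()       # the oldest dies, however fit it is
        best = max(best, child, key=fitness)
    return best
```

Because removal is by age rather than by fitness, a genotype persists only as long as its descendants keep winning tournaments, which regularizes the search against individuals that got lucky once but do not retrain well.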


1995 ◽  
Vol 11 (3) ◽  
pp. 431-455 ◽  
Author(s):  
Steven D. Glaser ◽  
Riley M. Chung

This report examines the state-of-the-art of in situ methods for estimating liquefaction potential in sands. In situ methods are especially important since "undisturbed" samples of loose sand for laboratory testing are virtually unobtainable. Various penetration test methods are examined, such as the SPT, the DMT, and the CPT and its variants. These methods are completely empirical in nature and have worked well to date. The current state of practice is an SPT-based method. Intrusive, seismic-based tests are also examined: the cross-hole and down-hole tests and the down-hole logger. The seismic velocity-based predictors have a stronger physical basis than the penetration test-based estimation methods but need a larger database. A non-intrusive test, the Spectral Analysis of Surface Waves technique, seems especially suited for examining sites of large areal extent.


Sensors ◽  
2020 ◽  
Vol 20 (24) ◽  
pp. 7071
Author(s):  
Takehiro Kashiyama ◽  
Hideaki Sobue ◽  
Yoshihide Sekimoto

The use of drones and other unmanned aerial vehicles has expanded rapidly in recent years. These devices are expected to enter practical use in various fields, such as taking measurements through aerial photography and transporting small, lightweight objects. At the same time, concerns have grown over these devices being misused for terrorism or other criminal activities. In response, several sensor systems have been developed to monitor drone flights. In particular, with the recent progress of deep neural network technology, monitoring systems based on image processing have been proposed. This study developed a monitoring system for flying objects that uses a 4K camera and a state-of-the-art convolutional neural network model to achieve real-time processing. We installed the system in a high-rise building in an urban area and evaluated the precision with which it could detect flying objects at different distances under different weather conditions. The results provide important information for determining the accuracy of image-processing-based monitoring systems in practice.


2018 ◽  
Vol 232 ◽  
pp. 01061
Author(s):  
Danhua Li ◽  
Xiaofeng Di ◽  
Xuan Qu ◽  
Yunfei Zhao ◽  
Honggang Kong

Pedestrian detection aims to localize and recognize every pedestrian instance in an image with a bounding box. The current state-of-the-art method is Faster RCNN, a network that uses a region proposal network (RPN) to generate high-quality region proposals and a Fast RCNN head to extract features and classify them into the corresponding categories. The contribution of this paper is the integration of low-level and high-level features into a Faster RCNN-based pedestrian detection framework, which efficiently increases the capacity of the features. In our experiments, we comprehensively evaluate this framework on the Caltech pedestrian detection benchmark, where our method achieves state-of-the-art accuracy.
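The feature-integration idea can be sketched in a few lines of numpy. The channel counts, resolutions, and nearest-neighbour upsampling below are illustrative assumptions, not the paper's actual layers:

```python
import numpy as np

# Illustrative shapes only: a high-resolution map from an early conv layer
# and a coarse, semantically rich map from a deep layer, fused by bringing
# both to a common resolution and concatenating along the channel axis.
low_level  = np.random.rand(64, 80, 60)   # (channels, height, width)
high_level = np.random.rand(512, 20, 15)  # 4x coarser in each spatial dimension

# Nearest-neighbour upsampling of the deep map to the early map's resolution.
upsampled = high_level.repeat(4, axis=1).repeat(4, axis=2)  # (512, 80, 60)

# The fused tensor would feed the downstream proposal and classification heads.
fused = np.concatenate([low_level, upsampled], axis=0)      # (576, 80, 60)
```

The design intuition is that early layers keep the fine spatial detail needed to localize small pedestrians, while deep layers carry the semantics needed to reject background, and concatenation lets the detector weigh both.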


2014 ◽  
Vol 2014 ◽  
pp. 1-9 ◽  
Author(s):  
Sarah E. Norred ◽  
Jacqueline Anne Johnson

Magnetic resonance-guided laser-induced thermotherapy (MRgLITT) has become an increasingly relevant therapy for tumor ablation due to its minimally invasive approach and broad applicability across many tissue types. The current state of the art applies laser irradiation via cooled optical fiber applicators in order to generate ablative heat and necrosis in tumor tissue. Magnetic resonance temperature imaging (MRTI) is used concurrently with this therapy to plan treatments and visualize tumor necrosis. Though its application in neurosurgery remains in its infancy, MRgLITT has been found to be a promising therapy for many types of brain tumors. This review examines the current use of MRgLITT with regard to the special clinical challenge of glioblastoma multiforme and examines the potential applications of next-generation nanotherapy specific to the treatment of glioblastoma.


2003 ◽  
Vol 10 (01) ◽  
pp. 127-146 ◽  
Author(s):  
J. C. ARNAULT

The potential applications of diamond in the field of electronics operating under high power and high temperature (aeronautics, aerospace, etc.) require highly oriented films on heterosubstrates. This is the key motivation for the huge research efforts carried out during the last ten years. Very significant progress has been accomplished, and nowadays diamond films with misorientations close to 1.5° can be grown on β-SiC monocrystals. Moreover, an excellent crystalline quality, with polar and azimuthal misalignments lower than 0.6°, has been reported for diamond films grown on iridium buffer layers. Unfortunately, these films are still too defective for high-power electronics applications. To achieve higher crystalline quality, further improvements of the deposition methods are needed. Nevertheless, a deeper knowledge of the elementary mechanisms occurring during the early stages of growth is also essential. The first part of this paper focuses on the state of the art of the different routes investigated towards heteroepitaxy. Secondly, the present knowledge of the early stages of diamond nucleation and growth on silicon substrates, for both classical nucleation and bias-enhanced nucleation (BEN), is reviewed. Finally, the remaining open questions concerning the understanding of the nucleation processes are discussed.


2019 ◽  
Vol 12 (2) ◽  
pp. 103
Author(s):  
Kuntoro Adi Nugroho ◽  
Yudi Eko Windarto

Various methods are available to perform feature extraction on satellite images. Among the available alternatives, the deep convolutional neural network (ConvNet) is the state-of-the-art method. Although previous studies have reported successful attempts at developing and implementing ConvNets in remote sensing applications, several issues are not well explored, such as the use of depthwise convolution, the final pooling layer size, and the comparison between grayscale and RGB settings. The objective of this study is to perform an analysis addressing these issues. Two feature learning algorithms were compared: ConvNet, the current state of the art for satellite image classification, and the Gray Level Co-occurrence Matrix (GLCM), a classic unsupervised feature extraction method. The experiments demonstrated results consistent with previous studies: ConvNet is superior to GLCM in most cases, especially with 3x3xn final pooling. The performance of the learning algorithms is much higher on features from RGB channels, except for ConvNet with a relatively small number of features.
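For reference, a GLCM for a single pixel offset can be computed in a few lines of numpy. This is a minimal sketch restricted to non-negative offsets; library implementations such as scikit-image's `graycomatrix` handle arbitrary distances and angles:

```python
import numpy as np

def glcm(image, levels, offset=(0, 1)):
    """Normalized co-occurrence matrix for one non-negative pixel offset."""
    dr, dc = offset
    src = image[: image.shape[0] - dr, : image.shape[1] - dc]
    dst = image[dr:, dc:]
    counts = np.zeros((levels, levels), dtype=np.int64)
    np.add.at(counts, (src, dst), 1)  # count (gray_i, gray_j) pairs
    return counts / counts.sum()      # normalize to joint probabilities

def contrast(p):
    """Haralick contrast: expected squared gray-level difference."""
    i, j = np.indices(p.shape)
    return float(((i - j) ** 2 * p).sum())

# Tiny 4-level image; the horizontal offset (0, 1) pairs each pixel
# with its right-hand neighbour.
img = np.array([[0, 0, 1, 1],
                [0, 0, 1, 1],
                [0, 2, 2, 2],
                [2, 2, 3, 3]])
P = glcm(img, levels=4)
```

Texture descriptors such as contrast, energy, or homogeneity are scalar statistics of this joint distribution, which is what makes GLCM a training-free, unsupervised baseline to compare learned ConvNet features against.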


Author(s):  
Hao Zhu ◽  
Shenghua Gao

Deep Convolutional Neural Network (DCNN)-based deep hashing has shown its success for fast and accurate image retrieval; however, directly minimizing the quantization error in deep hashing changes the distribution of DCNN features and, consequently, the similarity between the query and the retrieved images. In this paper, we propose a novel Locality-Constrained Deep Supervised Hashing. By simultaneously learning discriminative DCNN features and preserving the similarity between image pairs, the hash codes of our scheme preserve the distribution of DCNN features and thus favor accurate image retrieval. The contributions of this paper are two-fold: i) our analysis shows that minimizing the quantization error in deep hashing makes the features less discriminative, which is not desirable for image retrieval; ii) we propose a Locality-Constrained Deep Supervised Hashing that preserves the similarity between image pairs in hashing. Extensive experiments on the CIFAR-10 and NUS-WIDE datasets show that our method significantly boosts the accuracy of image retrieval; on the CIFAR-10 dataset in particular, the improvement is usually more than 6% in terms of MAP. Further, our method trains about 10 times faster than state-of-the-art methods.
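The tension the abstract identifies can be illustrated with a toy numpy sketch (random stand-in features, not DCNN outputs): binarizing features with sign() incurs a quantization error, but driving that error to zero pushes every feature onto a hypercube corner, and the resulting Hamming distances can express far fewer distinct similarity levels than the original real-valued distances:

```python
import numpy as np

# Random stand-in features: 5 images, 8-dimensional codes.
rng = np.random.default_rng(0)
features = rng.standard_normal((5, 8))

codes = np.sign(features)                     # hash codes in {-1, +1}
quant_err = np.mean((features - codes) ** 2)  # what naive deep hashing minimizes

def pairwise(x, metric):
    n = len(x)
    return [metric(x[i], x[j]) for i in range(n) for j in range(i + 1, n)]

# Real-valued distances give a rich ranking over the 10 image pairs...
cont = pairwise(features, lambda a, b: float(np.linalg.norm(a - b)))
# ...but Hamming distances on 8-bit codes take at most 9 distinct values
# (0..8), so among 10 pairs some ranking information is necessarily lost.
hamm = pairwise(codes, lambda a, b: int(np.sum(a != b)))
```

This is why the paper's locality constraint keeps the pairwise similarity structure of the continuous features alive instead of letting the quantization objective flatten it.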

