Optimisation of Point-Set Matching Model for Robust Fingerprint Verification in Changing Weather Conditions

Author(s):  
I. J. Udo ◽  
B. I. Akhigbe ◽  
B. S. Afolabi

Aims: To provide a baseline for the configuration of an Automated Fingerprint Verification System (AFVS) in the face of changing weather and environmental conditions in order to ensure performance accuracy.  Study Design:  Statistical and theoretical research approaches. Place and Duration of Study: Department of Computer Science and Engineering, Obafemi Awolowo University, Ile-Ife, Nigeria, between July 2017 and July 2018. Methodology: Data sets were collected in the South-South geopolitical zone of Nigeria. We used 10,000 minutiae points, defined by location and orientation features, extracted from fingerprint samples obtained under 9 different physical and environmental conditions over a 12-month period. These data were used to formulate linear regression models that served as constraints to the verification objective function, derived as a constrained linear least squares problem. The effects of the changing weather and environmental conditions were incorporated into the optimised point-set matching model in order to minimise the total relative error on location and orientation differences between pairs of minutiae. The model was implemented in Matlab using interior-point convex quadratic programming. Results: The results obtained from the optimisation function by adjusting the thresholds of the effects of weather and environmental conditions to 0.0 and 0.0 for the location and orientation properties of minutiae, respectively, showed minimal total relative errors on the corresponding pairs of matched minutiae when compared with using the default threshold values of the selected conditions. Conclusion: The optimised point-set based model could provide a computational basis for accurate fingerprint verification in low- and high-security AFVS under unfavourable conditions if the effects of those conditions are incorporated into the matching model. However, further validation and evaluation of the model with data sets from regions with similar weather and environmental conditions is needed to confirm its robustness in terms of performance accuracy.
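
The constrained linear least squares formulation described above can be prototyped outside Matlab. The sketch below, in Python with SciPy, treats the condition-effect terms as box-constrained least-squares variables; the design matrix, dimensions and bound values are illustrative assumptions, and SciPy's bounded least-squares solver stands in for the paper's interior-point convex quadratic programming routine.

```python
# Illustrative sketch of a constrained linear least squares fit for minutia alignment.
import numpy as np
from scipy.optimize import lsq_linear

rng = np.random.default_rng(0)
n_pairs = 50                               # matched minutia pairs (illustrative)
A = rng.normal(size=(2 * n_pairs, 4))      # rows: location and orientation residual equations
b = rng.normal(size=2 * n_pairs)           # observed differences between paired minutiae

# The last two variables stand in for the weather/environment effect terms;
# tightening their bounds toward 0.0 mimics the adjusted thresholds reported above.
lower = np.array([-1.0, -1.0, 0.0, 0.0])
upper = np.array([ 1.0,  1.0, 0.1, 0.1])

res = lsq_linear(A, b, bounds=(lower, upper))
print("estimated parameters:", res.x)
print("total squared residual:", 2.0 * res.cost)   # lsq_linear reports 0.5 * ||Ax - b||^2
```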

2020 ◽  
Author(s):  
Robert L. Peach ◽  
Alexis Arnaudon ◽  
Julia A. Schmidt ◽  
Henry A. Palasciano ◽  
Nathan R. Bernier ◽  
...  

Abstract. Networks are widely used as mathematical models of complex systems across many scientific disciplines, not only in biology and medicine but also in the social sciences, physics, computing and engineering. Decades of work have produced a vast corpus of research characterising the topological, combinatorial, statistical and spectral properties of graphs. Each graph property can be thought of as a feature that captures important (and sometimes overlapping) characteristics of a network. In the analysis of real-world graphs, it is crucial to integrate systematically a large number of diverse graph features in order to characterise and classify networks, as well as to aid network-based scientific discovery. In this paper, we introduce HCGA, a framework for highly comparative analysis of graph data sets that computes several thousands of graph features from any given network. HCGA also offers a suite of statistical learning and data analysis tools for automated identification and selection of important and interpretable features underpinning the characterisation of graph data sets. We show that HCGA outperforms other methodologies on supervised classification tasks on benchmark data sets whilst retaining the interpretability of network features. We also illustrate how HCGA can be used for network-based discovery through two examples where data is naturally represented as graphs: the clustering of a data set of images of neuronal morphologies, and a regression problem to predict charge transfer in organic semiconductors based on their structure. HCGA is an open platform that can be expanded to include further graph properties and statistical learning tools to allow researchers to leverage the wide breadth of graph-theoretical research to quantitatively analyse and draw insights from network data.
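
The workflow HCGA automates, computing many interpretable graph features and feeding them to a statistical learning model, can be illustrated without the package itself. The sketch below uses networkx and scikit-learn on two synthetic graph classes; the feature list and data are illustrative stand-ins, far smaller than the thousands of features HCGA extracts.

```python
# Minimal "many graph features + statistical learning" sketch (not the hcga API).
import networkx as nx
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

def graph_features(G):
    """Compute a small vector of interpretable graph features."""
    return [
        G.number_of_nodes(),
        G.number_of_edges(),
        nx.density(G),
        nx.average_clustering(G),
        nx.transitivity(G),
    ]

# Two synthetic classes: Erdos-Renyi vs Barabasi-Albert graphs.
graphs = [nx.erdos_renyi_graph(50, 0.08, seed=i) for i in range(50)] + \
         [nx.barabasi_albert_graph(50, 2, seed=i) for i in range(50)]
labels = np.array([0] * 50 + [1] * 50)

X = np.array([graph_features(G) for G in graphs])
clf = RandomForestClassifier(n_estimators=200, random_state=0)
print("CV accuracy:", cross_val_score(clf, X, labels, cv=5).mean())
# Feature importances of the fitted classifier point back to interpretable graph properties.
```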


1993 ◽  
Vol 23 (8) ◽  
pp. 1725-1731 ◽  
Author(s):  
Michael S. Williams ◽  
Timothy G. Gregoire

The method of weighted least squares can be used to achieve homogeneity of variance with linear regression that has a heterogeneous error structure. A weight function commonly used when constructing regression equations to predict tree volume is [Formula: see text], where k1 ≈ 1.0–2.1. This paper examines the weight function [Formula: see text] for modelling the error structure in two loblolly pine (Pinus taeda L.) data sets and one white oak (Quercus alba L.) data set. The weight function [Formula: see text] is recommended for all three data sets, for which the k1 values ranged from 1.80 to 2.07.
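
Because the weight functions appear only as placeholders in this record, the sketch below assumes the common power-of-x variance model (weights proportional to x^-k1) purely to illustrate how such a weighted least squares fit is set up; the data and coefficients are synthetic.

```python
# Hedged weighted least squares sketch with an assumed power-of-x variance model.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
x = rng.uniform(5, 50, size=200)             # e.g. a tree size variable
k1 = 2.0                                      # within the reported 1.0-2.1 range
y = 0.01 * x**2 + rng.normal(scale=0.02 * x**(k1 / 2), size=x.size)   # variance grows with x**k1

X = sm.add_constant(x**2)                     # simple volume-style regressor
wls = sm.WLS(y, X, weights=x**(-k1)).fit()    # weights = 1 / x**k1
ols = sm.OLS(y, X).fit()
print("WLS coefficients:", wls.params)
print("OLS coefficients:", ols.params)        # similar point estimates, poorer variance model
```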


2019 ◽  
Author(s):  
Anouk Bomers ◽  
Ralph Schielen ◽  
Suzanne Hulscher

Abstract. Flood frequency curves are usually highly uncertain since they are based on short data sets of measured discharges or weather conditions. To decrease the confidence intervals, an efficient bootstrap method is developed in this study. The Rhine river delta is considered as a case study. A hydraulic model is used to normalize historic flood events for anthropogenic and natural changes in the river system. As a result, the data set of measured discharges could be extended by approximately 600 years. The study shows that historic flood events decrease the confidence interval of the flood frequency curve significantly, specifically in the range of large floods. This even applies if the maximum discharges of these historic flood events are highly uncertain themselves.


2019 ◽  
Vol 19 (8) ◽  
pp. 1895-1908
Author(s):  
Anouk Bomers ◽  
Ralph M. J. Schielen ◽  
Suzanne J. M. H. Hulscher

Abstract. Flood frequency curves are usually highly uncertain since they are based on short data sets of measured discharges or weather conditions. To decrease the confidence intervals, an efficient bootstrap method is developed in this study. The Rhine river delta is considered as a case study. We use a hydraulic model to normalize historic flood events for anthropogenic and natural changes in the river system. As a result, the data set of measured discharges could be extended by approximately 600 years. The study shows that historic flood events decrease the confidence interval of the flood frequency curve significantly, specifically in the range of large floods. This even applies if the maximum discharges of these historic flood events are highly uncertain themselves.
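
A minimal illustration of bootstrapping a flood frequency estimate is sketched below: it resamples a synthetic record of annual maximum discharges and refits a Gumbel distribution, whereas the study's own method additionally normalizes and incorporates historic flood events. The distribution choice, record length and numbers are assumptions for illustration only.

```python
# Bootstrap confidence interval for a return-level discharge (illustrative sketch).
import numpy as np
from scipy.stats import gumbel_r

rng = np.random.default_rng(42)
annual_maxima = rng.gumbel(6000, 1500, size=100)   # synthetic annual maxima, m^3/s

T = 1000                         # return period of interest, years
p = 1.0 - 1.0 / T

estimates = []
for _ in range(2000):                                        # bootstrap replicates
    sample = rng.choice(annual_maxima, size=annual_maxima.size, replace=True)
    loc, scale = gumbel_r.fit(sample)
    estimates.append(gumbel_r.ppf(p, loc=loc, scale=scale))

lower, upper = np.percentile(estimates, [2.5, 97.5])
print(f"{T}-year discharge, 95% CI: {lower:.0f} to {upper:.0f} m^3/s")
```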


1995 ◽  
Vol 75 (4) ◽  
pp. 767-774
Author(s):  
H.-H. Mündel ◽  
T. Entz ◽  
J. P. Braun ◽  
F. A. Kiehn

Additive main effects and multiplicative interaction (AMMI) analysis of Safflower Cooperative Registration Test (SCRT) data gathered from 1984 to 1991 across the Canadian prairies was used to assess the possibility of reducing the number of locations for cultivar evaluation. The cultivars Saffire, Hartman, S-208, and S-541 were included in the 1984–1986 data set; and Saffire, AC Stirling, S-208, and S-541 in the 1988–1991 set. Seed yield, percent oil, days to maturity, and test weight were measured at 12 locations, although due to weather conditions, data were sometimes not available for all locations in any given year. The AMMI model fit the data well for all four traits, and indicated that among-year variability at a given location was usually higher than inter-location variability in a given year. Cultivar interaction effects for all four characteristics assessed were usually large for both data sets, indicating that differences among cultivars at a given location can vary considerably over years. Intra-location variability was not consistent for the four traits and no clear grouping of locations or locations with cultivars over years was evident. These results suggest that local environmental factors significantly influence safflower traits, and potential cultivars need to be evaluated at as many locations as resources permit. Key words: Carthamus tinctorius, cultivar × environment interactions, yield, oil, maturity, test weight
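
For readers unfamiliar with the mechanics of AMMI, the sketch below shows the core decomposition, additive main effects followed by a singular value decomposition of the interaction residuals, on a random cultivar-by-location table; it is not the SCRT data or the authors' analysis.

```python
# Compact AMMI decomposition sketch on an illustrative cultivar-by-location table.
import numpy as np

rng = np.random.default_rng(7)
Y = rng.normal(loc=2.0, scale=0.3, size=(4, 12))    # 4 cultivars x 12 locations (synthetic)

grand = Y.mean()
cult_eff = Y.mean(axis=1, keepdims=True) - grand    # cultivar main effects
loc_eff = Y.mean(axis=0, keepdims=True) - grand     # location main effects
residual = Y - grand - cult_eff - loc_eff           # interaction matrix

U, s, Vt = np.linalg.svd(residual, full_matrices=False)
ipca1_cultivar = U[:, 0] * np.sqrt(s[0])            # IPCA1 scores for cultivars
ipca1_location = Vt[0, :] * np.sqrt(s[0])           # IPCA1 scores for locations
explained = s**2 / np.sum(s**2)
print("share of interaction captured by IPCA1:", explained[0])
```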


Author(s):  
M. G. Kratzenberg ◽  
H. G. Beyer ◽  
S. Colle ◽  
A. Albertazzi ◽  
S. Güths ◽  
...  

Outdoor collector tests are inherently performed under variable weather conditions. Whereas ISO 9806 sets strong restrictions on the conditions of usable data sets for the steady-state collector test (SST), EN 12975 allows more variable ambient conditions for the quasi-dynamic collector test (QDT). This results in shorter collector test times, but could have drawbacks for the uncertainties, including the reproducibility, of the test results, i.e. the parameters of the collector model. As the weather conditions are never the same across several tests, outdoor collector tests are not repeatable, only reproducible. It is thus to be expected that the uncertainties of the collector parameters gained by a quasi-dynamic test are higher than those from the steady-state test. In this paper we evaluate the collector parameters and their uncertainties for a covered collector using both the SST and QDT test methods. As a basis for this comparison, we apply a large data set from 2 months of operation under quasi-dynamic conditions. This set is then separated into various single data sets which fulfill the conditions for either a complete steady-state or a complete quasi-dynamic test. For the quasi-dynamic test, various sets could be identified. For each of these tests, the parameters and their uncertainties are calculated. This allows for the comparison of both the model coefficients and their uncertainties. Using statistical procedures, it is tested whether the coefficients extracted from each of the ‘quasi-dynamic sets’ are coherent and stable within a 95% confidence level.
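
As an illustration of how quasi-dynamic collector parameters and their uncertainties can be identified by multilinear regression, the sketch below fits a simplified subset of the EN 12975 model terms to synthetic operating data; the regressor set, parameter values and uncertainty estimate are assumptions, not the authors' procedure.

```python
# Simplified quasi-dynamic collector parameter identification (illustrative sketch).
import numpy as np

rng = np.random.default_rng(3)
n = 500
Gb = rng.uniform(100, 900, n)        # beam irradiance, W/m^2
Gd = rng.uniform(50, 300, n)         # diffuse irradiance, W/m^2
dT = rng.uniform(5, 60, n)           # mean fluid minus ambient temperature, K
dTm_dt = rng.normal(0, 0.01, n)      # derivative of mean fluid temperature, K/s

true = np.array([0.78, 0.70, 3.5, 0.012, 6000.0])   # assumed eta0b, Kd-term, c1, c2, c5
X = np.column_stack([Gb, Gd, -dT, -dT**2, -dTm_dt])
q = X @ true + rng.normal(0, 5, n)   # specific useful power, W/m^2

coef, *_ = np.linalg.lstsq(X, q, rcond=None)
resid = q - X @ coef
cov = np.linalg.inv(X.T @ X) * resid.var(ddof=X.shape[1])
print("parameters:", coef)
print("standard uncertainties:", np.sqrt(np.diag(cov)))
```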


2018 ◽  
Vol 154 (2) ◽  
pp. 149-155
Author(s):  
Michael Archer

1. Yearly records of worker Vespula germanica (Fabricius) taken in suction traps at Silwood Park (28 years) and at Rothamsted Research (39 years) are examined. 2. Using the autocorrelation function (ACF), a significant negative 1-year lag followed by a lesser, non-significant positive 2-year lag was found in all, or parts of, each data set, indicating an underlying population dynamic of a 2-year cycle with a damped waveform. 3. The minimum number of years before the 2-year cycle with a damped waveform became apparent varied between 17 and 26, or the cycle was not found at all in some data sets. 4. Ecological factors delaying or preventing the occurrence of the 2-year cycle are considered.
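
The autocorrelation check described in point 2 can be reproduced on any yearly count series; the sketch below applies it to a synthetic series with a damped 2-year alternation, since the original trap records are not included here.

```python
# ACF check for a damped 2-year cycle on a synthetic yearly count series.
import numpy as np
from statsmodels.tsa.stattools import acf

rng = np.random.default_rng(11)
years = 39
cycle = np.where(np.arange(years) % 2 == 0, 1.0, -1.0)        # 2-year alternation
counts = 50 + 20 * cycle * np.exp(-np.arange(years) / 30) + rng.normal(0, 8, years)

r, ci = acf(counts, nlags=5, alpha=0.05)
for lag in (1, 2):
    significant = not (ci[lag][0] <= 0.0 <= ci[lag][1])
    print(f"lag {lag}: r = {r[lag]:+.2f}, significant at 5%: {significant}")
# A significant negative lag-1 followed by a weaker positive lag-2 is the
# signature of a 2-year cycle with a damped waveform.
```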


2018 ◽  
Vol 21 (2) ◽  
pp. 117-124 ◽  
Author(s):  
Bakhtyar Sepehri ◽  
Nematollah Omidikia ◽  
Mohsen Kompany-Zareh ◽  
Raouf Ghavami

Aims & Scope: In this research, 8 variable selection approaches were used to investigate the effect of variable selection on the predictive power and stability of CoMFA models. Materials & Methods: Three data sets, comprising 36 EPAC antagonists, 79 CD38 inhibitors and 57 ATAD2 bromodomain inhibitors, were modelled by CoMFA. First, for all three data sets, CoMFA models with all CoMFA descriptors were created; then, by applying each variable selection method, a new CoMFA model was developed, so that 9 CoMFA models were built for each data set. The results obtained show that noisy and uninformative variables affect CoMFA results. Based on the created models, applying 5 variable selection approaches, including FFD, SRD-FFD, IVE-PLS, SRD-UVE-PLS and SPA-jackknife, increases the predictive power and stability of CoMFA models significantly. Results & Conclusion: Among them, SPA-jackknife removes most of the variables, while FFD retains most of them. FFD and IVE-PLS are time-consuming processes, while SRD-FFD and SRD-UVE-PLS run in a few seconds. Also, applying FFD, SRD-FFD, IVE-PLS and SRD-UVE-PLS preserves CoMFA contour map information for both fields.
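
The named selection schemes (FFD, SRD-FFD, IVE-PLS, SRD-UVE-PLS, SPA-jackknife) are specialised 3D-QSAR tools; the sketch below only illustrates the general workflow of filtering noisy, near-constant field variables before PLS regression and comparing cross-validated performance. The data and the simple variance filter are illustrative stand-ins, not the authors' methods.

```python
# Variable filtering before PLS, checked by cross-validation (illustrative sketch).
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.feature_selection import VarianceThreshold
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(5)
n_mol, n_fields = 60, 500
X = rng.normal(size=(n_mol, n_fields))
X[:, 50:] *= 0.01                                  # mostly near-constant, noisy columns
y = X[:, :10] @ rng.normal(size=10) + rng.normal(scale=0.5, size=n_mol)

q2_all = cross_val_score(PLSRegression(n_components=5), X, y, cv=5,
                         scoring="r2").mean()

X_sel = VarianceThreshold(threshold=0.05).fit_transform(X)   # drop near-constant columns
q2_sel = cross_val_score(PLSRegression(n_components=5), X_sel, y, cv=5,
                         scoring="r2").mean()
print(f"cross-validated R2, all variables: {q2_all:.2f}; after filtering: {q2_sel:.2f}")
```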


Author(s):  
Kyungkoo Jun

Background & Objective: This paper proposes a Fourier transform inspired method to classify human activities from time series sensor data. Methods: Our method begins by decomposing the 1D input signal into 2D patterns, which is motivated by the Fourier conversion. The decomposition is aided by a Long Short-Term Memory (LSTM) network, which captures the temporal dependency of the signal and produces encoded sequences. The sequences, once arranged into a 2D array, can represent the fingerprints of the signals. The benefit of such a transformation is that we can exploit recent advances in deep learning models for image classification, such as the Convolutional Neural Network (CNN). Results: The proposed model, as a result, is a combination of LSTM and CNN. We evaluate the model over two data sets. For the first data set, which is more standardized than the other, our model outperforms, or at least equals, previous works. In the case of the second data set, we devise schemes to generate training and testing data by changing the parameters of the window size, the sliding size, and the labeling scheme. Conclusion: The evaluation results show that the accuracy is over 95% for some cases. We also analyze the effect of the parameters on the performance.
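
A hedged PyTorch sketch of the LSTM-to-CNN idea described above follows: an LSTM encodes each 1D sensor window into a sequence of hidden states, the states are stacked into a 2D array, and a small CNN classifies the resulting "fingerprint". Layer sizes, window length and the number of activity classes are assumptions, not the paper's configuration.

```python
# LSTM encoder feeding a 2D CNN classifier (illustrative architecture sketch).
import torch
import torch.nn as nn

class LstmCnnClassifier(nn.Module):
    def __init__(self, n_channels=3, hidden=64, n_classes=6):
        super().__init__()
        self.lstm = nn.LSTM(input_size=n_channels, hidden_size=hidden,
                            batch_first=True)
        self.cnn = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(4),
        )
        self.fc = nn.Linear(32 * 4 * 4, n_classes)

    def forward(self, x):                 # x: (batch, time, channels)
        seq, _ = self.lstm(x)             # (batch, time, hidden) encoded sequence
        img = seq.unsqueeze(1)            # treat the 2D array as a 1-channel image
        feat = self.cnn(img).flatten(1)
        return self.fc(feat)

# Example: a batch of 8 windows, 128 samples each, from a 3-axis accelerometer.
model = LstmCnnClassifier()
logits = model(torch.randn(8, 128, 3))
print(logits.shape)                       # torch.Size([8, 6])
```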


2019 ◽  
Vol 73 (8) ◽  
pp. 893-901
Author(s):  
Sinead J. Barton ◽  
Bryan M. Hennelly

Cosmic ray artifacts may be present in all photo-electric readout systems. In spectroscopy, they present as random unidirectional sharp spikes that distort spectra and may have an effect on post-processing, possibly affecting the results of multivariate statistical classification. A number of methods have previously been proposed to remove cosmic ray artifacts from spectra, but the goal of removing the artifacts while making no other change to the underlying spectrum is challenging. One of the most successful and commonly applied methods for the removal of cosmic ray artifacts involves the capture of two sequential spectra that are compared in order to identify spikes. The disadvantage of this approach is that at least two recordings are necessary, which may be problematic for dynamically changing spectra, and which can reduce the signal-to-noise (S/N) ratio when compared with a single recording of equivalent duration due to the inclusion of two instances of read noise. In this paper, a cosmic ray artifact removal algorithm is proposed that works in a similar way to the double acquisition method but requires only a single capture, so long as a data set of similar spectra is available. The method employs normalized covariance in order to identify a similar spectrum in the data set, from which a direct comparison reveals the presence of cosmic ray artifacts, which are then replaced with the corresponding values from the matching spectrum. The advantage of the proposed method over the double acquisition method is investigated in the context of the S/N ratio and is applied to various data sets of Raman spectra recorded from biological cells.
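
The single-capture replacement step can be sketched compactly: pick the most similar spectrum by normalised covariance, flag points where the target deviates sharply upward from it, and substitute the matching spectrum's values. The threshold, scaling and synthetic spectra below are assumptions for illustration, not the published algorithm's parameters.

```python
# Single-capture cosmic ray removal using a best-matching reference spectrum (sketch).
import numpy as np

def remove_cosmic_rays(target, dataset, k=5.0):
    """Replace spike pixels in `target` using the best-matching spectrum in `dataset`."""
    # Normalised covariance (Pearson correlation) against every spectrum in the set.
    scores = [np.corrcoef(target, s)[0, 1] for s in dataset]
    match = dataset[int(np.argmax(scores))]

    scale = float(target @ match) / float(match @ match)        # least-squares intensity scale
    diff = target - scale * match
    sigma = 1.4826 * np.median(np.abs(diff - np.median(diff)))  # robust noise estimate
    spikes = diff > k * sigma                 # one-sided: cosmic ray spikes are positive
    cleaned = target.copy()
    cleaned[spikes] = scale * match[spikes]
    return cleaned, spikes

# Synthetic example: similar Raman-like spectra, one with two injected spikes.
rng = np.random.default_rng(2)
x = np.linspace(0, 1, 1000)
base = np.exp(-((x - 0.3) / 0.02) ** 2) + 0.6 * np.exp(-((x - 0.7) / 0.05) ** 2)
dataset = [base * s + rng.normal(0, 0.01, x.size) for s in (0.9, 1.0, 1.1)]
target = base * 1.05 + rng.normal(0, 0.01, x.size)
target[[200, 640]] += 5.0                     # inject cosmic ray artifacts
cleaned, spikes = remove_cosmic_rays(target, dataset)
print("spike pixels found:", np.flatnonzero(spikes))
```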

