affine transforms
Recently Published Documents


TOTAL DOCUMENTS: 48 (five years: 7)

H-INDEX: 9 (five years: 1)

Circuit World, 2020, Vol. ahead-of-print (ahead-of-print)
Author(s): Rajasekar P., Mangalam H.

Purpose: The growing use of hand-held devices necessitates designs with low power consumption and small area. At the same time, information security is gaining enormous importance in data transmission and storage, and today's devices are connected, communicated with and controlled via the Internet of Things (IoT). In many applications, the most widely used cryptographic algorithm for providing security is the Advanced Encryption Standard (AES). This paper aims to design an efficient AES cryptography model with low power and small area.

Design/methodology/approach: First, the main issues related to area and power consumption in the AES encryption core are addressed. To implement an optimized AES core, the authors propose optimized multiplicative inverse, affine transform and Xtime multiplier functions, which are the core functions of AES. In addition, to achieve high throughput, the design uses multistage pipelining and resource-reuse architectures for the SBox and MixColumn stages of AES.

Findings: The results for the optimized AES architecture reveal that multistage pipelining and resource sharing form an effective design model for Field Programmable Gate Array (FPGA) implementation. The design can provide high security with low power and area for IoT and wireless sensor networks.

Originality/value: The proposed optimized, modified architecture has been implemented on an FPGA to measure the power, area and delay parameters. The multistage pipelining and resource sharing promise to minimize both area and power.
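The Xtime multiplier and the affine transform named above are small, fixed GF(2^8) operations defined by the AES standard. The sketch below shows their bit-level form in Python as a reference for what the optimized hardware blocks compute; it is a software illustration of the standard primitives, not the authors' FPGA design.

```python
# Reference (software) view of two AES primitives that the paper optimizes in hardware.

# xtime: multiplication by x (0x02) in GF(2^8) modulo the AES polynomial x^8 + x^4 + x^3 + x + 1.
def xtime(b: int) -> int:
    b <<= 1
    if b & 0x100:            # reduce modulo 0x11B when the shift overflows 8 bits
        b ^= 0x11B
    return b & 0xFF

# SubBytes affine transform: y = M*x + c over GF(2), applied after the multiplicative inverse.
AFFINE_CONST = 0x63

def sbox_affine(x: int) -> int:
    y = 0
    for i in range(8):
        # output bit i XORs input bits i, i+4, i+5, i+6, i+7 (indices mod 8)
        bit = ((x >> i) ^ (x >> ((i + 4) % 8)) ^ (x >> ((i + 5) % 8))
               ^ (x >> ((i + 6) % 8)) ^ (x >> ((i + 7) % 8))) & 1
        y |= bit << i
    return y ^ AFFINE_CONST

# Spot checks against the AES specification: S-box(0x00) = 0x63, and 0x80 * x = 0x1B.
assert sbox_affine(0x00) == 0x63
assert xtime(0x80) == 0x1B
```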


This paper first gives a general overview of existing approaches, which rely on basic factors (i.e. extraction of iris information, affine transforms and a distance matrix) as input. An essential factor in the effective design of an iris-based biometric approach is the accuracy with which the model can estimate the region of interest (the iris) under constraints and unforeseen conditions, which is difficult but necessary. We introduce a new iris-based biometric system that uses the complete information of the eye to build the feature set (digest); affine transforms are not incorporated, and a three-set distance measure is introduced to enhance accuracy. The algorithm defines the template size, the functionality and scope of the distance measure, and the methods of application through well-defined scientific and statistical principles. Unfortunately, the accuracy of existing approaches remains limited despite extensive experience and several improvements based on digital image processing and statistical models. We therefore incorporate several texture-analysis algorithms and computing techniques, along with parametric enhancement constraints, to ensure the feasibility, effectiveness and efficiency of the proposed framework in comparison with existing methods.
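As a generic illustration of the kind of distance-based matching such a framework relies on (not the authors' three-set measure, whose definition is not given here), the sketch below compares two binary iris digests with a normalized Hamming distance, the standard matching score in iris biometrics. The template size, mask handling and decision threshold are illustrative.

```python
import numpy as np

def normalized_hamming(code_a: np.ndarray, code_b: np.ndarray, mask: np.ndarray) -> float:
    """Fraction of disagreeing bits over the bits both templates consider valid."""
    valid = mask.astype(bool)
    if valid.sum() == 0:
        return 1.0                       # no usable bits: treat as a non-match
    return float(np.count_nonzero(code_a[valid] != code_b[valid])) / valid.sum()

# Illustrative use with random "digests"; a real template would come from texture analysis.
rng = np.random.default_rng(0)
enrolled = rng.integers(0, 2, 2048, dtype=np.uint8)
probe = enrolled.copy()
probe[:100] ^= 1                         # flip a few bits to mimic acquisition noise
mask = np.ones(2048, dtype=np.uint8)

score = normalized_hamming(enrolled, probe, mask)
print(score, score < 0.32)               # 0.32 is an illustrative decision threshold
```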


2020, Vol. 14, pp. 174830262097352
Author(s): Anis Theljani, Ke Chen

Different from image segmentation, developing a deep learning network for image registration is less straightforward because training data cannot be prepared or supervised by humans unless they are trivial (e.g. pre-designed affine transforms). One approach to an unsupervised deep learning model is to self-train the deformation fields through a network based on a loss function combining an image similarity metric and a regularisation term, just as in traditional variational methods. Such a function consists of a smoothing constraint on the derivatives and a constraint on the determinant of the transformation, in order to obtain a spatially smooth and plausible solution. Although any variational model may be used with a deep learning algorithm, the challenge lies in achieving robustness. The proposed algorithm is first trained on a new and robust variational model and tested on synthetic and real mono-modal images. The results show how it deals with large-deformation registration problems and leads to a real-time solution with no folding. It is then generalised to multi-modal images. Experiments and comparisons with learning and non-learning models demonstrate that this approach can deliver good performance and simultaneously generate an accurate diffeomorphic transformation.
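A minimal sketch of the kind of loss described above: an image-similarity term plus a smoothness penalty on the derivatives of the deformation field and a penalty discouraging negative Jacobian determinants (folding). The authors' exact similarity metric and weights are not given here, so the choices below (sum of squared differences, finite differences, a simple hinge on the determinant) are illustrative.

```python
import numpy as np

def registration_loss(moving_warped, fixed, disp, alpha=1.0, beta=1.0):
    """Variational-style loss for a 2D displacement field disp of shape (2, H, W)."""
    # Image similarity: sum of squared differences (one common choice).
    sim = np.mean((moving_warped - fixed) ** 2)

    # Smoothness: penalise the spatial derivatives of the displacement field.
    dux_dy, dux_dx = np.gradient(disp[0])   # u_x component
    duy_dy, duy_dx = np.gradient(disp[1])   # u_y component
    smooth = np.mean(dux_dx**2 + dux_dy**2 + duy_dx**2 + duy_dy**2)

    # Determinant of the Jacobian of the transformation x -> x + u(x);
    # negative values indicate folding, so penalise them with a quadratic hinge.
    det = (1.0 + dux_dx) * (1.0 + duy_dy) - dux_dy * duy_dx
    fold = np.mean(np.maximum(0.0, -det) ** 2)

    return sim + alpha * smooth + beta * fold
```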


2019, Vol. 277, pp. 02032
Author(s): Simon R Lang, Martin H Luerssen, David M Powers

In computer vision, simple features are found using classifiers called interest point (IP) detectors, which are often used to track features as the scene changes. For 2D-based classifiers it has been intuitive to measure repeated-point reliability with 2D metrics, given the difficulty of establishing ground truth beyond 2D. The aim is to bridge the gap between 2D classifiers and 3D environments and to improve performance analysis of 2D IP classification on 3D objects. This paper builds on existing work with 3D-scanned and artificial models to test conventional 2D feature detectors with the assistance of virtualised 3D scenes. Virtual-scene depth is leveraged in the tests to pre-select the closest repeatable points in both 2D and 3D contexts before repeatability is measured. This more reliable ground truth is used to analyse test configurations with a single-model and a 12-model dataset across affine transforms in x, y and z rotation, as well as x and y scaling, with nine well-known IP detectors. The virtual scene's ground truth demonstrates that 3D pre-selection eliminates a large portion of false positives that are normally considered repeated in 2D configurations. The results indicate that 3D virtual environments can assist in comparing the performance of conventional detectors when extending their applications to 3D environments, and can result in better classification of features when testing prospective classifiers' performance. A ROC-based informedness measure also highlights trade-offs in 2D/3D performance compared with conventional repeatability measures.
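A minimal sketch of the 2D repeatability measurement that underlies such comparisons: keypoints detected in a reference view are mapped through the known affine transform and matched, within a pixel threshold, to keypoints detected in the transformed view. The 3D pre-selection step described in the paper additionally filters candidate pairs by their virtual-scene depth; the depth check shown here is only a schematic stand-in under assumed inputs.

```python
import numpy as np

def repeatability(kps_ref, kps_tgt, affine, eps=3.0,
                  depth_ref=None, depth_tgt=None, depth_tol=None):
    """kps_* are (N, 2) pixel coordinates; affine is a 2x3 matrix mapping ref -> target."""
    ones = np.ones((kps_ref.shape[0], 1))
    projected = np.hstack([kps_ref, ones]) @ affine.T   # ground-truth positions in the target view

    repeated = 0
    for i, p in enumerate(projected):
        d = np.linalg.norm(kps_tgt - p, axis=1)
        j = int(np.argmin(d))
        if d[j] > eps:
            continue
        # Schematic 3D pre-selection: reject pairs whose scene depths disagree,
        # which removes 2D coincidences that lie on different surfaces.
        if depth_tol is not None and abs(depth_ref[i] - depth_tgt[j]) > depth_tol:
            continue
        repeated += 1
    return repeated / max(len(projected), 1)
```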


2019, Vol. 07 (01), pp. 33-45
Author(s): Okechi Onuoha, Hilton Tnunay, Zhenhong Li, Zhengtao Ding

This paper presents novel affine formation algorithms and their implementation in different scenarios for the coordination of multi-agent systems with triple-integrator agent dynamics, in both sampled-data and continuous-time settings. Agents under affine maneuver control must be capable of producing required geometric shapes while simultaneously accomplishing desired maneuvers such as shearing, rotation, translation and scaling. In existing work, these tasks can be accomplished for systems whose agent dynamics are described by double integrators and whose agents communicate continuously in time. In some practical situations, however, inter-agent communication may be limited to periodic intervals. Furthermore, a wide range of systems is governed by more complex, higher-order dynamics. This paper presents two novel algorithms based on triple-integrator agent dynamics. Four implementation cases, comprising two scenarios each studied in both continuous-time and sampled-data settings, are considered. Under the proposed algorithms, the collection of agents can track time-varying targets that are affine transforms of the reference formation, provided the leaders have knowledge of the required formation maneuvers. Detailed implementation results are presented to demonstrate the efficacy of the proposed algorithms.
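In affine maneuver control, the target formation is an affine image of a fixed reference configuration: each agent i tracks p_i*(t) = A(t) r_i + b(t), where A(t) and b(t) encode rotation, scaling, shearing and translation. The sketch below computes such targets for a planar reference formation; the specific A(t), b(t) and reference points are illustrative, and the paper's triple-integrator tracking laws are not reproduced here.

```python
import numpy as np

# Reference formation: four agents at the corners of a unit square (illustrative).
r = np.array([[0.0, 0.0], [1.0, 0.0], [1.0, 1.0], [0.0, 1.0]])

def affine_targets(t: float) -> np.ndarray:
    """Time-varying targets p_i*(t) = A(t) r_i + b(t): rotation + scaling + shear + translation."""
    theta, scale, shear = 0.1 * t, 1.0 + 0.05 * t, 0.2 * t
    rot = np.array([[np.cos(theta), -np.sin(theta)],
                    [np.sin(theta),  np.cos(theta)]])
    A = scale * rot @ np.array([[1.0, shear], [0.0, 1.0]])
    b = np.array([0.5 * t, 0.0])          # constant-velocity translation
    return r @ A.T + b

print(affine_targets(0.0))   # equals r: the maneuver starts from the reference shape
```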


2018, Vol. 7 (2.8), pp. 42
Author(s): D Rajasekhar, T Jayachandra Prasad, K Soundararajan

Feature detection and image matching constitute two primary tasks in photogrammetry and have applications in a number of fields, one of which is face recognition. The critical nature of this application demands that the image matching algorithm used to recognise facial features be robust and fast. The proposed method uses affine transforms to derive the descriptors, which are then classified by means of Bayes' theorem. This paper demonstrates the suitability of the proposed image matching algorithm for use in face recognition applications. The Yale face dataset is used for validation, and the results are compared with a SIFT (Scale Invariant Feature Transform)-based face recognition approach.
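As a hedged illustration of using affine transforms during descriptor extraction (the exact transform set and the Bayes classifier of the paper are not specified here), the sketch below generates affine-warped views of a face image with SciPy, from which descriptors could then be computed and matched; the rotation/tilt grid is illustrative.

```python
import numpy as np
from scipy.ndimage import affine_transform

def affine_views(image: np.ndarray, rotations=(0, 15, 30), tilts=(1.0, 0.8, 0.6)):
    """Yield affine-warped copies of a grayscale image (rotation about the centre + axis tilt)."""
    h, w = image.shape
    centre = np.array([h / 2.0, w / 2.0])
    for deg in rotations:
        theta = np.deg2rad(deg)
        rot = np.array([[np.cos(theta), -np.sin(theta)],
                        [np.sin(theta),  np.cos(theta)]])
        for tilt in tilts:
            A = rot @ np.diag([1.0, tilt])       # rotation followed by a one-axis compression
            # affine_transform maps output coords through A (plus offset) to input coords;
            # the offset keeps the warp centred on the image centre.
            offset = centre - A @ centre
            yield affine_transform(image, A, offset=offset, order=1, mode='nearest')

# Illustrative use on a synthetic image; a real pipeline would feed each view to a descriptor.
views = list(affine_views(np.random.rand(64, 64)))
print(len(views))   # 9 warped views
```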

