Identification of fingers on the basis of Hamiltonian cycles of local features

The problem of finding the lengths of Hamiltonian cycles on complex graphs is considered. The task has practical applications such as determining optimal routes (the travelling salesman problem), identifying graph structures (recognizing the characteristics of local features of biometric objects), etc. When verifying biometric samples, problems arise from the addition or disappearance of reference points, deformation of the distances between them, and linear and angular displacements of the whole sample. The method described in the article eliminates the displacement problem, since the solution is stable under shuffling of the points. Moreover, it is possible to obtain reference templates with the same stability; obtaining them requires less computational complexity and provides greater recognition accuracy. A detailed solution is proposed based on the branch-and-bound method applied to the symmetric matrices of graphs that describe the distribution of local features in fingerprint images. It is known that a guaranteed solution for the length of the Hamiltonian cycle of an arbitrary planar distribution of points is possible only by exhaustive search; however, the computational complexity of such a search is not acceptable. The branch-and-bound method, like all existing directed-search methods, does not guarantee finding a solution for a graph of arbitrarily large dimension. Therefore, a method of decomposing graphs is proposed, which reduces a complex problem to a set of simpler ones and thereby allows for a significant reduction in computational complexity. The relative invariance of the metrics of Hamiltonian cycles to probabilistic shifts, which are characteristic of biometric pattern recognition problems, is shown.
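As a minimal illustration of the kind of directed search the abstract describes (not the authors' decomposition method), a depth-first branch-and-bound sketch for the shortest Hamiltonian cycle over a symmetric distance matrix might look like this; the `square` example matrix is invented for illustration:

```python
def hamiltonian_cycle_length(dist):
    """Shortest Hamiltonian cycle length on a symmetric distance matrix,
    found by depth-first search with a branch-and-bound pruning rule:
    abandon any partial tour already at least as long as the best tour."""
    n = len(dist)
    best = float("inf")

    def dfs(node, visited, length):
        nonlocal best
        if length >= best:            # bound: prune this branch
            return
        if len(visited) == n:         # all points used: close the cycle
            best = min(best, length + dist[node][0])
            return
        for nxt in range(1, n):
            if nxt not in visited:
                dfs(nxt, visited | {nxt}, length + dist[node][nxt])

    dfs(0, {0}, 0.0)
    return best

# Four points on a unit square: the optimal tour length is 4.
square = [
    [0, 1, 2 ** 0.5, 1],
    [1, 0, 1, 2 ** 0.5],
    [2 ** 0.5, 1, 0, 1],
    [1, 2 ** 0.5, 1, 0],
]
print(hamiltonian_cycle_length(square))  # → 4.0
```

The pruning rule does not change the worst-case exponential complexity, which is why the article resorts to graph decomposition for large inputs.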

Author(s):  
W. Liu

Path planning is the most important task in mobile robot navigation. The task basically involves three aspects. First, the planned path must run from a given starting point to a given endpoint. Second, it should ensure the robot's collision-free movement. Third, among all the possible paths that meet the first two requirements it must be, in a certain sense, optimal.

Path planning methods can be classified according to different characteristics. In the context of using intelligent technologies, they can be divided into traditional methods and heuristic ones. By the nature of the environment, planning methods can be divided into methods for a static environment and methods for a dynamic one (it should be noted, however, that a static environment is rare). Methods can also be divided according to the completeness of information about the environment, namely methods with complete information (in this case the issue is global path planning) and methods with incomplete information (usually this refers to situational awareness in the immediate vicinity of the robot, in which case it is local path planning). Note that incomplete information about the environment can be a consequence of a changing environment, i.e. in a dynamic environment planning is usually local.

The literature offers a great many methods for path planning in which various heuristic techniques are used, which, as a rule, derive from the denotative meaning of the problem being solved. This review discusses the main approaches to the problem. Five classes of basic methods can be distinguished: graph-based methods, methods based on cell decomposition, potential-field methods, optimization methods, and methods based on intelligent technologies.

Many methods of path planning produce, as a result, a chain of reference points (waypoints) connecting the beginning and end of the path. This should be seen as an intermediate result: the problem of routing between the reference points along the constructed chain then arises. It is called the path smoothing task, and the review addresses this problem as well.
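As a hedged sketch of the path-smoothing step mentioned above (one common gradient-relaxation heuristic, not necessarily one of the methods the review covers), interior waypoints can be pulled simultaneously toward their original positions and toward their neighbours; the weights and waypoints below are illustrative assumptions:

```python
def smooth_path(path, weight_data=0.5, weight_smooth=0.3, tol=1e-6):
    """Iteratively relax interior waypoints toward both their original
    positions (data term) and their neighbours (smoothness term).
    Endpoints stay fixed so the path still connects start to goal."""
    new = [list(p) for p in path]
    change = tol
    while change >= tol:
        change = 0.0
        for i in range(1, len(path) - 1):          # endpoints untouched
            for d in range(len(path[0])):
                old = new[i][d]
                new[i][d] += weight_data * (path[i][d] - new[i][d])
                new[i][d] += weight_smooth * (new[i - 1][d] + new[i + 1][d]
                                              - 2 * new[i][d])
                change += abs(old - new[i][d])
    return new

waypoints = [[0, 0], [0, 1], [1, 1], [2, 1], [2, 2]]
smoothed = smooth_path(waypoints)
print(smoothed[0], smoothed[-1])   # endpoints unchanged
```

The two weights trade off fidelity to the planner's waypoints against curvature of the final path.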


2021 ◽  
Author(s):  
Mukhamet Nurpeiissov ◽  
Askat Kuzdeuov ◽  
Aslan Assylkhanov ◽ 
Yerbolat Khassanov ◽  
Hüseyin Atakan Varol

This paper addresses sequential indoor localization using the WiFi and Inertial Measurement Unit (IMU) modules commonly found in commercial off-the-shelf smartphones. Specifically, we developed an end-to-end neural network-based localization system integrating WiFi received signal strength indicator (RSSI) and IMU data without external data fusion models. The developed system leverages the advantages of the WiFi and IMU modules to locate finer-level sequential positions of a user at a 150 Hz sampling rate. Additionally, to demonstrate the efficacy of the proposed approach, we created the IMUWiFine dataset comprising IMU and WiFi RSSI readings sequentially collected at fine-level reference points. The dataset contains 120 trajectories covering an aggregate distance of over 14 kilometers. We conducted extensive experiments using deep learning models and achieved a mean error distance of 1.1 meters on an unseen evaluation set, which makes our approach suitable for many practical applications requiring meter-level accuracy. To enable experiment and result reproducibility, we made the developed localization system and the IMUWiFine dataset publicly available in our GitHub repository.
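The paper's system is an end-to-end neural network, but the underlying idea of combining IMU dead reckoning with WiFi position fixes can be illustrated with a classical one-dimensional complementary filter; the `alpha` gain and the sample data below are purely illustrative assumptions, not values from the paper:

```python
def fuse_positions(imu_steps, rssi_fixes, alpha=0.9):
    """Complementary filter: propagate the position with IMU displacement
    steps (smooth but drifting), then pull the estimate toward each
    noisy RSSI position fix (unbiased but jumpy)."""
    pos = rssi_fixes[0]
    track = [pos]
    for step, fix in zip(imu_steps, rssi_fixes[1:]):
        predicted = pos + step                    # IMU dead reckoning
        pos = alpha * predicted + (1 - alpha) * fix
        track.append(pos)
    return track

# Straight 1 m/s walk along a corridor; IMU steps are clean,
# RSSI fixes are noisy (all values invented for illustration).
steps = [1.0] * 5
fixes = [0.0, 1.3, 1.8, 3.2, 3.9, 5.1]
print([round(p, 2) for p in fuse_positions(steps, fixes)])
```

A learned model replaces the fixed `alpha` with weights fitted to data, which is one way to read the paper's motivation for an end-to-end network.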


Symmetry ◽  
2020 ◽  
Vol 12 (5) ◽  
pp. 719
Author(s):  
Lina Lu ◽  
Wanpeng Zhang ◽  
Xueqiang Gu ◽  
Xiang Ji ◽  
Jing Chen

The Monte Carlo Tree Search (MCTS) has demonstrated excellent performance in solving many planning problems. However, in many practical applications, especially in adversarial environments, the state space and branching factors are huge and the planning horizon is long. It is computationally expensive for flat, non-hierarchical MCTS to cover a sufficient number of rewarded states that are far away from the root, so it is inefficient for planning problems with a long planning horizon, a huge state space, and large branching factors. In this work, we propose a novel hierarchical MCTS-based online planning method named HMCTS-OP to tackle this issue. HMCTS-OP integrates MAXQ-based task hierarchies and hierarchical MCTS algorithms into the online planning framework. Specifically, the MAXQ-based task hierarchies reduce the search space and guide the search process, so the computational complexity is significantly reduced. Moreover, this reduction enables the MCTS to search deeper and find better actions in a limited time. We evaluate the performance of HMCTS-OP in the domain of online planning in an asymmetric adversarial environment. The experimental results show that HMCTS-OP outperforms other online planning methods in this domain.
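For context, the flat (non-hierarchical) UCT-style MCTS that the paper improves upon can be sketched as follows; the toy counting domain and all parameter values are illustrative assumptions, not the paper's benchmark:

```python
import math
import random

def mcts(root_state, step, actions, is_terminal, reward, iters=2000, c=1.4):
    """Flat UCT: repeatedly select a root action by the UCB1 rule,
    roll out randomly to a terminal state, and back up the reward."""
    stats = {a: [0, 0.0] for a in actions(root_state)}  # visits, total reward
    for i in range(1, iters + 1):
        # selection at the root: UCB1 over the root actions
        a = max(stats, key=lambda a: float("inf") if stats[a][0] == 0
                else stats[a][1] / stats[a][0]
                + c * math.sqrt(math.log(i) / stats[a][0]))
        # random rollout from the chosen successor state
        s = step(root_state, a)
        while not is_terminal(s):
            s = step(s, random.choice(actions(s)))
        stats[a][0] += 1
        stats[a][1] += reward(s)
    return max(stats, key=lambda a: stats[a][0])   # most-visited action

# Toy domain: start at 7, add 1 or 2 per move; landing exactly on 10
# scores 1, overshooting scores 0.  Playing +1 first is better here.
random.seed(0)
best = mcts(7,
            step=lambda s, a: s + a,
            actions=lambda s: [1, 2],
            is_terminal=lambda s: s >= 10,
            reward=lambda s: 1.0 if s == 10 else 0.0)
print(best)
```

The hierarchical variant in the paper constrains which actions the selection step may consider at each level, shrinking the tree this flat loop must cover.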


Author(s):  
Pokpong Amornvit ◽  
Sasiwimol Sanohkan

Face scanners promise wide applications in medicine and dentistry, including facial recognition, capturing facial emotions, facial cosmetic planning and surgery, and maxillofacial rehabilitation. Higher accuracy improves the quality of the data recorded from the face scanner, which will ultimately improve the outcome. Although various face scanners are available on the market, there is no evidence of a suitable face scanner for practical applications. The aim of this in vitro study was to analyze the face scans obtained from four scanners: EinScan Pro (EP), EinScan Pro 2X Plus (EP+) (Shining 3D Tech. Co., Ltd., Hangzhou, China), iPhone X (IPX) (Apple Store, Cupertino, CA, USA), and Planmeca ProMax 3D Mid (PM) (Planmeca USA, Inc., IL, USA), and to compare the scans with a control (measured with a Vernier caliper), in order to identify the appropriate scanner for face scanning. A master face model was designed in Rhinoceros 3D modeling software (Rhino, Robert McNeel and Associates for Windows, Washington DC, USA) and printed from polylactic acid at a resolution of 200 microns on the x, y, and z axes. The face model was 3D scanned with the four scanners, five times each, according to the manufacturers' recommendations: EinScan Pro and EinScan Pro 2X Plus using Shining Software, iPhone X using the Bellus3D Face Application (Bellus3D, version 1.6.2, Bellus3D, Inc., Campbell, CA, USA), and Planmeca ProMax 3D Mid. Scan data files were saved as stereolithography (STL) files for the measurements, and from the STL files digital face models were created in Rhinoceros 3D modeling software.
Measurements were taken five times from the reference points in three axes (x, y, and z) using a digital Vernier caliper (VC) (Mitutoyo 150 mm Digital Caliper, Mitutoyo Co., Kanagawa, Japan), and the mean was calculated and used as the control. The same measurements were taken on the digital face models of EP, EP+, IPX, and PM in Rhinoceros 3D modeling software. Descriptive statistics were computed in SPSS version 20 (IBM Company, Chicago, USA). One-way ANOVA with Scheffé post hoc tests was used to analyze the differences between the control and the scans (EP, EP+, IPX, and PM). The significance level was set at p = 0.05. EP+ showed the highest accuracy. EP showed medium accuracy (accurate up to 10 mm of length), while IPX and PM showed the least accuracy. EP+ was accurate in measuring a depth of 2 mm (diameter 6 mm); all other scanners (EP, IPX, and PM) were less accurate in measuring depth. In conclusion, the accuracy of an optical scan depends on the technology used by each scanner. EP+ is recommended for face scanning.
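The F statistic underlying the study's one-way ANOVA can be computed directly from the group measurements; the values below are invented for illustration and are not the study's data:

```python
def one_way_anova_F(groups):
    """F statistic for a one-way ANOVA: between-group mean square
    divided by within-group mean square, for k groups of measurements."""
    k = len(groups)
    N = sum(len(g) for g in groups)
    grand = sum(sum(g) for g in groups) / N
    ss_between = sum(len(g) * (sum(g) / len(g) - grand) ** 2 for g in groups)
    ss_within = sum(sum((x - sum(g) / len(g)) ** 2 for x in g) for g in groups)
    return (ss_between / (k - 1)) / (ss_within / (N - k))

# Five repeated length measurements (mm) from two hypothetical scanners
# versus a caliper control -- all values are made up for illustration.
control = [10.00, 10.01, 9.99, 10.00, 10.00]
scan_a  = [10.02, 10.03, 10.01, 10.02, 10.02]
scan_b  = [10.30, 10.28, 10.31, 10.29, 10.32]
print(round(one_way_anova_F([control, scan_a, scan_b]), 1))
```

A large F (compared against the F distribution with k−1 and N−k degrees of freedom) indicates that at least one scanner's mean differs from the control, which is what the Scheffé post hoc tests then localize.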


2022 ◽  
Author(s):  
Iman Mohamad Sharaf

This study proposes a new perspective on the TOPSIS and VIKOR methods using the recently introduced spherical fuzzy sets (SFSs) to handle the vagueness in subjective data and the uncertainties in objective data simultaneously. When implementing these techniques using SFSs, two main problems can arise that may lead to incorrect results. First, the reference points might change with the utilized score function. Second, the distance between reference points might not be the largest among the available ratings, as is usually assumed. To overcome these deficiencies and increase the robustness of the two methods, they are implemented without utilizing any reference points, to minimize the effect of defuzzification, and without measuring distances, to eliminate the effect of distance formulas. In the proposed methods, when an SFS expresses the performance of an alternative for a criterion, this SFS per se can be viewed as a measure of proximity to the aspired level. Conversely, the conjugate of the SFS can be viewed as a measure of proximity to the ineffectual level. Two practical applications demonstrate the proposed techniques. The first example handles a warehouse location selection problem. The second evaluates hydrogen storage systems for automobiles with different types of data (crisp values, linguistic variables, and type-1 fuzzy sets); these data are transformed to SFSs to provide a more comprehensive analysis. A comparative study with earlier versions of TOPSIS and VIKOR explicates the adequacy of the proposed methods and the consistency of the results.
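For reference, the classic crisp TOPSIS procedure on which the proposed SFS variant builds (not the authors' spherical fuzzy version) can be sketched as follows; the decision matrix, weights, and criterion types are illustrative assumptions echoing the warehouse-selection example:

```python
def topsis(matrix, weights, benefit):
    """Classic crisp TOPSIS: rank alternatives by relative closeness to
    the ideal solution.  benefit[j] is True for benefit criteria and
    False for cost criteria."""
    m, n = len(matrix), len(matrix[0])
    # vector-normalize each column, then apply the criterion weights
    norms = [sum(matrix[i][j] ** 2 for i in range(m)) ** 0.5 for j in range(n)]
    v = [[weights[j] * matrix[i][j] / norms[j] for j in range(n)]
         for i in range(m)]
    ideal = [max(v[i][j] for i in range(m)) if benefit[j]
             else min(v[i][j] for i in range(m)) for j in range(n)]
    worst = [min(v[i][j] for i in range(m)) if benefit[j]
             else max(v[i][j] for i in range(m)) for j in range(n)]
    scores = []
    for i in range(m):
        d_plus = sum((v[i][j] - ideal[j]) ** 2 for j in range(n)) ** 0.5
        d_minus = sum((v[i][j] - worst[j]) ** 2 for j in range(n)) ** 0.5
        scores.append(d_minus / (d_plus + d_minus))
    return scores

# Three hypothetical warehouse sites; criteria: capacity (benefit), cost.
scores = topsis([[80, 30], [65, 20], [90, 45]],
                weights=[0.6, 0.4], benefit=[True, False])
print(max(range(3), key=lambda i: scores[i]))
```

The paper's critique targets exactly the `ideal`/`worst` reference points and the Euclidean distances in this sketch, which its SFS-based reformulation avoids.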


2013 ◽  
Vol 67 (2) ◽  
pp. 211-225 ◽  
Author(s):  
Xiaolin Gong ◽  
Tingting Qin

This paper addresses the issue of state estimation in the integration of a Strapdown Inertial Navigation System (SINS) and Global Positioning System (GPS), which is used for airborne earth observation positioning and orientation. For a nonlinear system, especially with large initial attitude errors, the performance of linear estimation approaches will degrade. In this paper a nonlinear error model based on angle errors is built, and a nonlinear estimation algorithm called the Central Difference Rauch-Tung-Striebel (R-T-S) Smoother (CDRTSS) is utilized in SINS/GPS integration post-processing. In this algorithm, the measurements are first processed by the forward Central Difference Kalman filter (CDKF) and then a separate backward smoothing pass is used to obtain the improved solution. The performance of this algorithm is compared with a similar smoother based on an extended Kalman filter known as ERTSS through Monte Carlo simulations and flight tests with a loaded SINS/GPS integrated system. Furthermore, a digital camera was used to verify the precision of practical applications in a check field with numerous reference points. All these validity checks demonstrate that CDRTSS is a better method and the work of this paper will offer a new approach for SINS/GPS integration for Synthetic Aperture Radar (SAR) and other airborne earth observation tasks.
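The central-difference smoother in the paper generalizes the linear Rauch-Tung-Striebel smoother to nonlinear models; the forward-filter/backward-smoother structure itself can be shown with a scalar sketch (identity dynamics and invented noise parameters, not the paper's SINS/GPS model):

```python
def kalman_rts(zs, q, r, x0=0.0, p0=10.0):
    """1-D constant-state Kalman filter followed by a backward RTS
    smoothing pass: the linear analogue of the CDKF + CDRTSS structure."""
    xs, ps, xps, pps = [], [], [], []   # filtered / predicted moments
    x, p = x0, p0
    for z in zs:                        # forward filter
        xp, pp = x, p + q               # predict (identity dynamics)
        k = pp / (pp + r)               # Kalman gain
        x, p = xp + k * (z - xp), (1 - k) * pp
        xs.append(x); ps.append(p); xps.append(xp); pps.append(pp)
    xs_s, ps_s = xs[:], ps[:]
    for t in range(len(zs) - 2, -1, -1):  # backward RTS pass
        g = ps[t] / pps[t + 1]            # smoother gain
        xs_s[t] = xs[t] + g * (xs_s[t + 1] - xps[t + 1])
        ps_s[t] = ps[t] + g * g * (ps_s[t + 1] - pps[t + 1])
    return xs_s

# Noisy measurements of a constant true value 5.0 (invented data).
smoothed = kalman_rts([5.2, 4.8, 5.1, 4.9, 5.0], q=0.01, r=0.5)
print([round(x, 2) for x in smoothed])
```

The backward pass is what makes smoothing a post-processing technique: every estimate benefits from all later measurements, which suits offline SINS/GPS positioning for airborne observation.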


2015 ◽  
Vol 2015 ◽  
pp. 1-16 ◽  
Author(s):  
Yulong Gao ◽  
Yanping Chen

To reduce computational complexity and rely on less prior knowledge, energy-based spectrum sensing under a nonreconstruction framework is studied. Compressed measurements are used directly, eliminating the reconstruction error and the high computational complexity introduced by the reconstruction algorithms of compressive sensing. Firstly, we summarize the conventional energy-based spectrum sensing methods. Next, the major effort is placed on obtaining the statistical characteristics of the compressed measurements and their squared form, such as the mean, variance, and probability density function. Then, energy-based spectrum sensing under the nonreconstruction framework is addressed and its performance is evaluated theoretically and experimentally. Simulations with different parameters are performed to verify the performance of the presented algorithm. The theoretical analysis and simulation results reveal that the performance is only slightly worse than that of the conventional energy-normalization method and reconstruction-based spectrum sensing algorithms, while the computational complexity decreases remarkably, which is a primary consideration for practical applications. Accordingly, the presented method is reasonable and effective for fast detection in most cognitive scenarios.
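A minimal sketch of the energy-detection decision rule (using a standard Gaussian approximation to the chi-square test statistic, not the paper's compressed-measurement statistics) might look like this; the noise variance, false-alarm rate, and simulated data are illustrative assumptions:

```python
import math
import random

def energy_detect(y, noise_var, pfa=0.05):
    """Energy detection: compare the summed squared samples to a
    threshold set from the noise-only (H0) distribution, using a
    Gaussian approximation of the chi-square statistic for large N."""
    n = len(y)
    stat = sum(v * v for v in y)
    # Under H0, stat ~ noise_var * chi2(n): mean n*var, std sqrt(2n)*var.
    # 1.645 is the standard normal quantile for pfa = 0.05.
    threshold = noise_var * (n + 1.645 * math.sqrt(2 * n))
    return stat > threshold

random.seed(1)
h0 = [random.gauss(0, 1) for _ in range(200)]   # noise only
h1 = [v + 1.0 for v in h0]                      # constant signal in noise
print(energy_detect(h0, 1.0), energy_detect(h1, 1.0))
```

The paper's contribution is precisely how the mean, variance, and density of `stat` change when `y` consists of compressed measurements rather than raw samples, which shifts the threshold accordingly.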


1995 ◽  
Author(s):  
A. V. Frolov ◽  
G. A. Akimova ◽  
V. V. Mataibaev ◽  
M. P. Romanova ◽  
Yu. P. Seryikh

Author(s):  
Przemysław Andrzej Wałęga

Temporal reasoning constitutes one of the main topics within the field of Artificial Intelligence. Particularly interesting are interval-based methods, in which time intervals are treated as the basic ontological objects, as opposed to point-based methods, where time points are taken as basic. The former approach is more expressive and seems more appropriate for applications such as natural language analysis or the verification of real-time processes. My research concerns the classical interval-based logic, namely Halpern-Shoham logic (HS). In particular, my investigation continues the recently proposed search for well-behaved HS fragments, i.e., fragments expressive enough for practical applications yet of low computational complexity, obtained by imposing syntactical restrictions on the usage of propositional connectives in their languages.
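The modalities of HS correspond to Allen's relations between intervals; a small classifier for a subset of those relations (an illustrative sketch, not anything from the research described) could look like this:

```python
def allen_relation(a, b):
    """Name the Allen relation between closed intervals a=(a1,a2) and
    b=(b1,b2), from a's viewpoint; inverse relations are omitted
    for brevity and reported as "other"."""
    (a1, a2), (b1, b2) = a, b
    if a2 < b1:
        return "before"
    if a2 == b1:
        return "meets"
    if a1 == b1 and a2 == b2:
        return "equals"
    if a1 == b1 and a2 < b2:
        return "starts"
    if a1 > b1 and a2 == b2:
        return "finishes"
    if a1 > b1 and a2 < b2:
        return "during"
    if a1 < b1 and b1 < a2 < b2:
        return "overlaps"
    return "other"

print(allen_relation((1, 3), (3, 6)))   # → meets
print(allen_relation((2, 4), (1, 6)))   # → during
```

Each HS modality quantifies over intervals standing in one such relation to the current interval, which is why restricting the available connectives and modalities changes the logic's complexity.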


2021 ◽  
Author(s):  
Seyed Reza Mir Alavi

Communication is performed by transmitting signals through a medium. It is common that signals originating from different sources are mixed in the transport medium. The operation of separating source signals without prior information about the sources is referred to as blind source separation (BSS). Blind source separation for wireless sensor networks has recently received attention because of low cost and the easy coverage of large areas. Distributed processing is attractive as it is scalable and consumes low power. Existing distributed BSS algorithms either require a fully connected pattern of connectivity to ensure good performance, or require a high computational load at each sensor node to enhance scalability. This motivates us to develop distributed BSS algorithms that can be implemented over any arbitrary graph with fully shared computations and with good performance.

This thesis presents three studies on distributed algorithms. The first two studies concern existing distributed algorithms used in linearly constrained convex optimization problems, which are common in signal processing and machine learning. These studies aim to improve the algorithms in terms of computational complexity, communication cost, processor coordination and scalability, making them more suitable for implementation on sensor networks and thus forming a basis for the development of distributed BSS algorithms on sensor networks in our third study.

In the first study, we consider constrained problems in which the constraint includes a weighted sum of all the decision variables. By formulating a constrained dual problem associated with the original constrained problem, we were able to develop a distributed algorithm that can be run both synchronously and asynchronously on any arbitrary graph with lower communication cost than traditional distributed algorithms.

In the second study, we consider constrained problems in which the constraint is separable. By making use of the augmented Lagrangian function and splitting the dual variable (Lagrange multiplier) associated with each partial constraint, we were able to develop a distributed, fully asynchronous algorithm with lower computational complexity than traditional distributed algorithms. The simplicity of the algorithm is a consequence of approximating the constraint on the equality of the decoupled dual variables. We also provide a measure of the effect of the inaccuracy of this approximation on the optimal value of the primal objective function.

Finally, in the third study, we investigate distributed processing solutions for BSS on sensor networks. We propose two distributed processing schemes for BSS, referred to as scheme 1 and scheme 2. In scheme 1, each sensor node estimates one specific source signal, while in scheme 2, by formulating a consensus optimization problem, each sensor node estimates all source signals in a fully shared-computation manner. Our proposed algorithms carry the following features: low computational complexity, low power consumption, low data transmission rate, scalability and excellent performance over arbitrary graphs. Although all of our proposed algorithms share the aforementioned properties, each is superior in one or some of the features compared to the others. Comparative experimental results show that among all our proposed distributed BSS algorithms, a variant of scheme 1 performs best when all features are considered. This is achieved by making use of the concept of pairwise mutual information along with adding a sparsity assumption on the parameters of the model used in BSS.
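The consensus formulation in scheme 2 rests on nodes agreeing over an arbitrary graph; a minimal sketch of synchronous consensus averaging (a standard building block of such methods, not the thesis's BSS algorithm, with an invented step size and path graph) looks like this:

```python
def consensus_average(values, edges, steps=200, eps=0.2):
    """Each node repeatedly nudges its state toward its graph
    neighbours; on any connected graph with a small enough step size,
    all states converge to the network-wide mean."""
    x = list(values)
    nbrs = {i: [] for i in range(len(values))}
    for i, j in edges:
        nbrs[i].append(j)
        nbrs[j].append(i)
    for _ in range(steps):
        # synchronous update: every node uses the previous iterate x
        x = [xi + eps * sum(x[j] - xi for j in nbrs[i])
             for i, xi in enumerate(x)]
    return x

# Path graph 0-1-2-3: every node converges to the mean 2.5.
print([round(v, 3) for v in
       consensus_average([1.0, 2.0, 3.0, 4.0], [(0, 1), (1, 2), (2, 3)])])
# → [2.5, 2.5, 2.5, 2.5]
```

Each node only ever exchanges states with its direct neighbours, which is the property that makes such schemes attractive for low-power sensor networks.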

