average computation time
Recently Published Documents


TOTAL DOCUMENTS: 32 (FIVE YEARS: 11)

H-INDEX: 5 (FIVE YEARS: 1)

2021 ◽  
Vol 2021 ◽  
pp. 1-11
Author(s):  
Hongwei Sun ◽  
Jiu Wang ◽  
Zhongwen Zhang ◽  
Naibao Hu ◽  
Tong Wang

High dimensionality and noise make it difficult to detect relevant biomarkers in omics data. Previous studies have shown that penalized maximum trimmed likelihood estimation is effective at identifying mislabeled samples in high-dimensional data with mislabeling errors. However, the algorithm commonly used in these studies, the concentration step (C-step), does not guarantee that the criterion function improves monotonically across iterations when applied to robust penalized regression, because the regularization parameters change during the iteration. This makes the C-step algorithm run very slowly, especially on high-dimensional omics data. The AR-Cstep algorithm (a C-step combined with an acceptance-rejection scheme) is proposed. In simulation experiments, the AR-Cstep algorithm converged faster (its average computation time was only 2% of the C-step algorithm's) and was more accurate in variable selection and outlier identification. The two algorithms were further compared on triple-negative breast cancer (TNBC) RNA-seq data. AR-Cstep resolves the C-step's convergence failures and ensures that each iteration moves in a direction that improves the criterion function. As an improvement of the C-step algorithm, AR-Cstep can be extended to other robust models with regularization parameters.
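The acceptance-rejection idea described in this abstract can be sketched generically: a candidate update produced by a C-step-style refit is accepted only if it lowers the criterion, which makes the objective monotonically non-increasing. This is a minimal sketch, not the authors' implementation; the trimmed penalized criterion and the ridge-style refit below are illustrative stand-ins for the paper's penalized trimmed likelihood.

```python
import numpy as np

def criterion(X, y, beta, h, lam):
    """Penalized trimmed loss: sum of the h smallest squared
    residuals plus an L1 penalty (illustrative stand-in)."""
    r2 = np.sort((y - X @ beta) ** 2)[:h]
    return r2.sum() + lam * np.abs(beta).sum()

def c_step(X, y, beta, h, lam):
    """One concentration step: refit on the h observations with
    the smallest residuals under the current estimate.  A ridge
    refit stands in here for the paper's penalized fit."""
    idx = np.argsort((y - X @ beta) ** 2)[:h]
    Xs, ys = X[idx], y[idx]
    p = X.shape[1]
    return np.linalg.solve(Xs.T @ Xs + lam * np.eye(p), Xs.T @ ys)

def ar_c_step(X, y, beta0, h, lam, max_iter=50):
    """C-step wrapped in an acceptance-rejection scheme: a proposal
    is kept only if it improves the criterion, so the objective can
    never move in the wrong direction."""
    beta, best = beta0, criterion(X, y, beta0, h, lam)
    for _ in range(max_iter):
        prop = c_step(X, y, beta, h, lam)
        val = criterion(X, y, prop, h, lam)
        if val < best:          # accept: criterion improved
            beta, best = prop, val
        else:                   # reject: keep current estimate, stop
            break
    return beta, best
```

Because rejected proposals leave the current estimate untouched, the sequence of criterion values is non-increasing by construction, which is the monotonicity property the plain C-step lacks when regularization parameters change between iterations.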


2021 ◽  
pp. 1-26
Author(s):  
Barbora Hudcová ◽  
Tomáš Mikolov

Abstract In order to develop systems capable of artificial evolution, we need to identify which systems can produce complex behavior. We present a novel classification method applicable to any class of deterministic discrete space and time dynamical systems. The method is based on classifying the asymptotic behavior of the average computation time in a given system before entering a loop. We were able to identify a critical region of behavior that corresponds to a phase transition from ordered behavior to chaos across various classes of dynamical systems. To show that our approach can be applied to many different computational systems, we demonstrate the results of classifying cellular automata, Turing machines, and random Boolean networks. Further, we use this method to classify 2D cellular automata to automatically find those with interesting, complex dynamics. We believe that our work can be used to design systems in which complex structures emerge. Also, it can be used to compare various versions of existing attempts to model open-ended evolution (Channon, 2006; Ofria & Wilke, 2004; Ray, 1991).
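The quantity being classified above — the average number of steps a deterministic discrete system takes before entering a loop — can be measured directly by hashing visited configurations. The sketch below, under the assumption of elementary cellular automata with periodic boundaries as the system class, is illustrative only; the paper's actual classification analyzes the asymptotic growth of this average.

```python
import numpy as np

def step(state, rule):
    """One update of an elementary cellular automaton with periodic
    boundaries; `rule` is the Wolfram rule number (0-255)."""
    table = [(rule >> i) & 1 for i in range(8)]
    left, right = np.roll(state, 1), np.roll(state, -1)
    idx = 4 * left + 2 * state + right
    return np.array([table[i] for i in idx], dtype=np.uint8)

def transient_length(state, rule, max_steps=10_000):
    """Steps before the trajectory first revisits a configuration,
    i.e. the 'computation time' before the attractor loop is entered."""
    seen = {state.tobytes(): 0}
    for t in range(1, max_steps + 1):
        state = step(state, rule)
        key = state.tobytes()
        if key in seen:
            return seen[key]   # loop entered at this earlier time
        seen[key] = t
    return max_steps

def average_transient(rule, width, samples, seed=0):
    """Average transient length over random initial configurations."""
    rng = np.random.default_rng(seed)
    lengths = [transient_length(rng.integers(0, 2, width, dtype=np.uint8),
                                rule)
               for _ in range(samples)]
    return sum(lengths) / samples
```

Plotting `average_transient` against system size for different rules is one way to expose the ordered/critical/chaotic regimes the abstract describes: ordered rules have bounded transients, chaotic ones have transients growing quickly with size.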


2021 ◽  
Vol 13 (2) ◽  
pp. 31-37
Author(s):  
Jangkung Raharjo ◽  
Hermagasantos Zein ◽  
Adi Soeprijanto ◽  
Kharisma Bani Adam

Some optimization problems cannot be solved by differentiation. To handle objective functions of various forms, differentiable or not, a range of artificial methods has been developed, such as artificial neural networks, particle swarm optimization, and genetic algorithms. The literature reports, however, that these artificial methods frequently become trapped in local minima, which undermines their accuracy. This paper proposes the Large to Small Area Technique for power system optimization, which works by progressively reducing the feasible region. The method is accurate in the sense that it never violates any constraint on its way to the optimal point. To support the claim that it outperforms other methods, logical arguments and tests with mathematical simulations are needed. The proposed method was tested on 24 target points using ten functions consisting of quadratic and first-order functions. The results show an average accuracy of 99.97% and an average computation time of 62 seconds. The proposed technique can thus serve as an alternative for solving the economic dispatch problem in power systems.


PLoS ONE ◽  
2021 ◽  
Vol 16 (4) ◽  
pp. e0249436 ◽  
Author(s):  
Shahbaz Khan ◽  
Muhammad Tufail ◽  
Muhammad Tahir Khan ◽  
Zubair Ahmad Khan ◽  
Javaid Iqbal ◽  
...  

Agricultural production is vital for the stability of a country's economy. Controlling weed infestation with agrochemicals is necessary for increasing crop productivity; however, their excessive use has severe repercussions for the environment (damaging the ecosystem) and for the human operators exposed to them. Several authors in the literature have proposed Unmanned Aerial Vehicles (UAVs) for performing the required spraying, which is considered safer and more precise than conventional methods. The objective of this study was therefore to develop an accurate real-time recognition system of spraying areas for UAVs, which is of utmost importance for UAV-based sprayers. A two-step target recognition system was developed using deep learning on images collected from a UAV. A coriander cropland was used to build a classifier for recognizing spraying areas. The developed deep learning system achieved an average F1 score of 0.955, with an average classifier recognition time of 3.68 ms. The system can be deployed in real time on UAV-based sprayers for accurate spraying.


2021 ◽  
Vol 2021 ◽  
pp. 1-14
Author(s):  
Kangkang He ◽  
Qi Cao ◽  
Gang Ren ◽  
Dawei Li ◽  
Shuichao Zhang

Map matching can provide useful traffic information by aligning the observed trajectories of vehicles with the road network on a digital map. It has an essential role in many advanced intelligent traffic systems (ITSs). Unfortunately, almost all current map-matching approaches were developed for GPS trajectories generated by probe sensors mounted in a few vehicles and cannot deal with the trajectories of massive vehicle samples recorded by fixed sensors, such as camera detectors. In this paper, we propose a novel map-matching model termed Fixed-MM, which is designed specifically for fixed sensor data. Based on two key observations from real-world data, Fixed-MM considers (1) the utility of each path and (2) the travel time constraint to match the trajectories of fixed sensor data to a specific path. Meanwhile, with the laws derived from the distribution of GPS trajectories, a path generation algorithm was developed to search for candidates. The proposed Fixed-MM was examined with field-test data. The experimental results show that Fixed-MM outperforms two types of classical map-matching algorithms regarding accuracy and efficiency when fixed sensor data are used. The proposed Fixed-MM can identify 68.38% of the links correctly, even when the spatial gap between the sensor pair is increased to five kilometers. The average computation time spent by Fixed-MM on one point is only 0.067 s, and we argue that the proposed method can be used online for many real-time ITS applications.
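The travel-time constraint described above can be sketched as a simple filter over candidate paths between a fixed sensor pair: a path survives only if its expected traversal time is consistent with the observed time gap between the two detections. This is a hypothetical sketch; the path representation, the `tol` tolerance parameter, and its value are assumptions, not Fixed-MM's actual formulation.

```python
def filter_by_travel_time(candidates, observed_gap, tol=0.5):
    """Keep candidate paths whose expected travel time is within a
    relative tolerance of the observed time gap between two fixed
    sensors.  `candidates` maps a path (tuple of link ids) to its
    expected travel time in seconds; `tol` is a hypothetical
    relative tolerance."""
    return {path: t for path, t in candidates.items()
            if abs(t - observed_gap) <= tol * observed_gap}
```

Pruning candidates this way before scoring path utility is what lets a matcher stay fast even when the spatial gap between sensors, and hence the number of possible paths, grows large.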


Author(s):  
Yun Meng ◽  
Shaojun Zhu ◽  
Bangquan Liu ◽  
Dechao Sun ◽  
Li Liu ◽  
...  

Introduction: Shape segmentation is a fundamental problem in computer graphics and geometric modeling. Although shape segmentation algorithms have been widely studied in the mathematics community, little progress has been made on computing segmentations interactively on polygonal surfaces using geodesic loops. Method: We compute geodesic distance fields with the improved Fast Marching Method (FMM) proposed by Xin and Wang. We propose a new algorithm for computing geodesic loops over a triangulated surface and a new interactive shape segmentation approach on triangulated surfaces. Result: The average computation time on a model with 50K vertices is less than 0.08 s. Discussion: In future work, we will use an exact geodesic algorithm and parallel computing techniques to obtain smoother geodesic loops. Conclusion: Extensive experimental results show that the proposed algorithm can efficiently compute high-precision geodesic loop paths, and that our method can be used for interactive shape segmentation in real time.


10.2196/15917 ◽  
2020 ◽  
Vol 6 (2) ◽  
pp. e15917
Author(s):  
Tigran Avoundjian ◽  
Julia C Dombrowski ◽  
Matthew R Golden ◽  
James P Hughes ◽  
Brandon L Guthrie ◽  
...  

Background Many public health departments use record linkage between surveillance data and external data sources to inform public health interventions. However, little guidance is available to inform these activities, and many health departments rely on deterministic algorithms that may miss many true matches. In the context of public health action, these missed matches lead to missed opportunities to deliver interventions and may exacerbate existing health inequities. Objective This study aimed to compare the performance of record linkage algorithms commonly used in public health practice. Methods We compared five deterministic (exact, Stenger, Ocampo 1, Ocampo 2, and Bosh) and two probabilistic record linkage algorithms (fastLink and beta record linkage [BRL]) using simulations and a real-world scenario. We simulated pairs of datasets with varying numbers of errors per record and the number of matching records between the two datasets (ie, overlap). We matched the datasets using each algorithm and calculated their recall (ie, sensitivity, the proportion of true matches identified by the algorithm) and precision (ie, positive predictive value, the proportion of matches identified by the algorithm that were true matches). We estimated the average computation time by performing a match with each algorithm 20 times while varying the size of the datasets being matched. In a real-world scenario, HIV and sexually transmitted disease surveillance data from King County, Washington, were matched to identify people living with HIV who had a syphilis diagnosis in 2017. We calculated the recall and precision of each algorithm compared with a composite standard based on the agreement in matching decisions across all the algorithms and manual review. Results In simulations, BRL and fastLink maintained a high recall at nearly all data quality levels, while being comparable with deterministic algorithms in terms of precision. 
Deterministic algorithms typically failed to identify matches in scenarios with low data quality. All the deterministic algorithms had a shorter average computation time than the probabilistic algorithms. BRL had the slowest overall computation time (14 min when both datasets contained 2000 records). In the real-world scenario, BRL achieved the best trade-off between recall (309/309, 100.0%) and precision (309/312, 99.0%). Conclusions Probabilistic record linkage algorithms maximize the number of true matches identified, reducing gaps in the coverage of interventions and maximizing the reach of public health action.
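The two performance measures defined in the abstract — recall (true matches identified out of all true matches) and precision (true matches out of all matches reported) — can be computed directly from the sets of record pairs, as in this minimal sketch:

```python
def linkage_metrics(predicted, truth):
    """Recall (sensitivity) and precision (positive predictive value)
    of a record linkage result, per the abstract's definitions.
    `predicted` and `truth` are iterables of matched record-id pairs."""
    predicted, truth = set(predicted), set(truth)
    true_pos = len(predicted & truth)
    recall = true_pos / len(truth) if truth else 1.0
    precision = true_pos / len(predicted) if predicted else 1.0
    return recall, precision
```

For example, an algorithm that reports 312 matches of which 309 are among 309 true matches would score recall 309/309 = 100.0% and precision 309/312 = 99.0%, the figures reported for BRL in the real-world scenario.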


2020 ◽  
pp. 164-170
Author(s):  
Suha Dh. Athab ◽  
Nassir H. Selman

Optic disc (OD) localization is a basic step in the screening, identification, and risk assessment of diverse ophthalmic pathologies such as glaucoma and diabetic retinopathy. In fact, successful OD localization is the fundamental step toward exact OD segmentation. This paper proposes a fully automatic procedure for OD localization based on two of the OD's most relevant features: high intensity and vasculature convergence. Merging these two features enables the proposed method to localize the OD in challenging conditions such as faint disc boundaries, unbalanced shading, and the presence of retinal pathologies like cotton wool spots and exudates, which usually share the same color and structure as the OD. To demonstrate the robustness, reliability, and broad applicability of the proposed approach, we tested 1614 images from publicly available datasets: Messidor (1200 images), the Standard Diabetic Retinopathy Database DIARETDB0 (130 images), Digital Retinal Images for Optic Nerve Segmentation (DRIONS, 110 images), the Standard Diabetic Retinopathy Database DIARETDB1 (89 images), High-Resolution Fundus (HRF, 45 images), and Digital Retinal Images for Vessel Extraction (DRIVE, 40 images). The method localized the OD successfully in 1599 images and failed in 15, an average success rate of 99.07% with an average computation time of 0.5 seconds per image.


2019 ◽  
Vol 9 (1) ◽  
Author(s):  
Chenxi Yang ◽  
Negar Tavassolian ◽  
Wassim M. Haddad ◽  
James M. Bailey ◽  
Behnood Gholami

Abstract This paper introduces a novel framework for fast parameter identification in personalized pharmacokinetic problems. Given one sample observation of a new subject, the framework predicts the subject's parameters based on prior knowledge from a pharmacokinetic database. The feasibility of this framework was demonstrated by developing a new algorithm based on the Cluster Newton method, namely the constrained Cluster Newton method, in which the initial points of the parameters are constrained by the database. The algorithm was tested with the compartmental model of propofol on a database of 59 subjects. The average overall absolute percentage error of the constrained Cluster Newton method is 12.10% with the threshold approach and 13.42% with the nearest-neighbor approach. The average computation time of one estimation is 13.10 seconds; using parallel computing with 12 workers, it is reduced to 1.54 seconds. The results suggest that the proposed framework can effectively improve the prediction accuracy of pharmacokinetic parameters from limited observations compared with conventional methods. Computation-cost analyses indicate that the framework can exploit parallel computing and provide solutions within practical response times, leading to fast and accurate parameter identification for pharmacokinetic problems.
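The accuracy figure quoted above is an average absolute percentage error over the estimated parameters. A generic version of that measure can be sketched as follows; the paper's exact weighting across parameters and subjects is not specified here, so this is an assumption-laden illustration:

```python
def mean_abs_percentage_error(predicted, observed):
    """Average absolute percentage error across a set of parameter
    values: mean of |predicted - observed| / |observed|, in percent."""
    errors = [abs(p - o) / abs(o) for p, o in zip(predicted, observed)]
    return sum(errors) / len(errors) * 100.0
```

With this measure, an estimate of 110 against a true value of 100 contributes a 10% error, so a reported 12.10% average means the constrained Cluster Newton estimates deviate from the reference parameters by about 12% on average.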


Author(s):  
Zinah Jaffar Mohammed Ameen

Modern handheld devices such as smartphones have become increasingly widespread in recent years, and some applications now let users conveniently perform tasks that used to require a personal computer (PC) or laptop. To increase students' interactive participation in learning, an Android application offers a new way of delivering a test or quiz on smart devices. This paper implements a mobile quiz application that uses face recognition as an authentication process to verify students' identity. Authentication is implemented in two steps: face detection using the Mobile Vision APIs and face recognition using the Speeded-Up Robust Features (SURF) algorithm. Questions are loaded from the database server over the Wi-Fi network; all questions and answers, along with a timer, are configured by an administrator. The application was tested in a classroom with seven students; the achieved recognition rate was 85%, with a total average computation time of 8.816 s per user login.

