average running time
Recently Published Documents


TOTAL DOCUMENTS

37
(FIVE YEARS 15)

H-INDEX

6
(FIVE YEARS 2)

2022 ◽  
Vol 2146 (1) ◽  
pp. 012037
Author(s):  
Ying Zou

Abstract Aiming at the problems of high complexity and low accuracy in visual depth map feature recognition, this study designs a recognition algorithm based on the principal component direction depth gradient histogram (PCA-HODG). To obtain a high-quality depth map, the disparity of the visual image pair must first be computed. To obtain a quantized regional shape histogram, edge detection and gradient calculation are then performed on the depth map; the dimensionality of the result is reduced using principal component analysis and reduced again with sliding-window detection, completing the feature extraction. The results show that, compared with other algorithms, the PCA-HODG algorithm designed in this study improves average classification accuracy and significantly reduces average running time. This indicates that the algorithm can cut running time through dimensionality reduction, extract depth map features more accurately, and remain robust.
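The pipeline described above, gradients on the depth map, per-cell orientation histograms, then PCA, can be sketched as follows. This is a minimal illustration with an assumed cell size and bin count, not the authors' exact implementation; `hodg_features` and `pca_reduce` are hypothetical names.

```python
import numpy as np

def hodg_features(depth, cell=8, bins=9):
    """HOG-style histogram of depth gradients for one depth map."""
    gy, gx = np.gradient(depth.astype(float))
    mag = np.hypot(gx, gy)                       # gradient magnitude
    ang = np.mod(np.arctan2(gy, gx), np.pi)      # unsigned orientation
    h, w = depth.shape
    feats = []
    for i in range(0, h - cell + 1, cell):
        for j in range(0, w - cell + 1, cell):
            m = mag[i:i + cell, j:j + cell].ravel()
            a = ang[i:i + cell, j:j + cell].ravel()
            # magnitude-weighted orientation histogram per cell
            hist, _ = np.histogram(a, bins=bins, range=(0, np.pi), weights=m)
            feats.append(hist)
    return np.concatenate(feats)

def pca_reduce(X, k):
    """Project feature rows of X onto the top-k principal components."""
    Xc = X - X.mean(axis=0)
    # SVD of the centered data gives the principal directions.
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:k].T
```

A 32 × 32 depth map with 8 × 8 cells and 9 bins yields a 144-dimensional descriptor before PCA.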


2021 ◽  
Vol 2021 ◽  
pp. 1-9
Author(s):  
Man He ◽  
Wan Sun ◽  
Naxian Sha

The study was intended to eliminate noise in three-dimensional transvaginal ultrasound (3D-TVS) images and improve diagnostic accuracy for intrauterine adhesion (IUA). The extreme learning machine (ELM) algorithm was introduced first. One hundred and thirty suspected IUA patients were taken as the research subjects. The denoising effect of the ELM algorithm was evaluated in terms of mean squared error (MSE), peak signal-to-noise ratio (PSNR), and running time, and its diagnostic efficiency for IUA was assessed by precision, specificity, and sensitivity. The support vector machine (SVM) algorithm was introduced for comparison. The MSE values of the SVM and ELM algorithms were 0.0045 and 0.0021, their PSNR values were 52.3 and 64.5, and their average running times were 16.35 ± 1.33 s and 11.22 ± 0.89 s, respectively, so the ELM algorithm was superior to the SVM algorithm in denoising effect. Moreover, the ELM algorithm showed excellent diagnostic efficiency for patients with various degrees of IUA. In conclusion, ELM can effectively eliminate noise in 3D-TVS images and demonstrates excellent diagnostic efficiency for IUA, making it worthy of clinical application.
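The denoising metrics used in the study, MSE and PSNR, are standard and can be computed as below. This sketch assumes images scaled to [0, 1]; the paper's pixel scale (and hence its PSNR values) may differ.

```python
import math

def mse(a, b):
    """Mean squared error between two equally sized images (flat lists)."""
    assert len(a) == len(b)
    return sum((x - y) ** 2 for x, y in zip(a, b)) / len(a)

def psnr(a, b, peak=1.0):
    """Peak signal-to-noise ratio in dB; higher means less residual noise."""
    e = mse(a, b)
    return float("inf") if e == 0 else 10 * math.log10(peak ** 2 / e)
```

For 8-bit images, pass `peak=255` and compute the MSE on the 0-255 scale.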


2021 ◽  
Vol 15 (4) ◽  
pp. 615-628
Author(s):  
Ferdinandus Mone ◽  
Justin Eduardo Simarmata

Making a class schedule is difficult and time-consuming because of several obstacles, such as a shortage of lecture rooms, a shortage of teaching staff, and the large number of courses offered in one semester. This study aims to apply genetic algorithms to class scheduling in order to simplify the process. The method used is the waterfall method, following the stages of the Software Development Life Cycle. The results show that the genetic algorithm application can resolve the following constraints: 1) room and time clashes, 2) lecturer clashes, 3) clashes with Friday prayer times, 4) time slots preferred by lecturers for certain reasons, and 5) practicum sessions that must be held in the laboratory. Because it handles these constraints, the application of genetic algorithms to course scheduling is categorized as effective. Based on runs with 51 lecturers (51 chromosomes), the average running time over 30 consecutive runs is 25.86 minutes, so the genetic algorithm application is also efficient for course scheduling.
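The hard constraints listed above translate naturally into a GA fitness function that counts violations. The sketch below assumes a simple tuple encoding and is not the authors' exact chromosome design; `FRIDAY_PRAYER`, `fitness`, and `mutate` are illustrative names.

```python
import random
from collections import Counter

# Slots blocked for Friday prayers (assumed example value).
FRIDAY_PRAYER = {("Fri", "11:30")}

def fitness(schedule):
    """Count hard-constraint violations; 0 means a feasible timetable.

    `schedule` is a list of (course, lecturer, room, (day, time)) tuples.
    """
    penalty = 0
    room_use = Counter((room, slot) for _, _, room, slot in schedule)
    lect_use = Counter((lect, slot) for _, lect, _, slot in schedule)
    penalty += sum(n - 1 for n in room_use.values() if n > 1)   # room/time clash
    penalty += sum(n - 1 for n in lect_use.values() if n > 1)   # lecturer clash
    penalty += sum(1 for _, _, _, slot in schedule if slot in FRIDAY_PRAYER)
    return penalty

def mutate(schedule, slots, p=0.1):
    """Randomly reassign time slots -- one of the GA's variation operators."""
    return [(c, l, r, random.choice(slots) if random.random() < p else s)
            for c, l, r, s in schedule]
```

A GA then evolves a population of such schedules toward fitness 0.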


Author(s):  
Benjamin Hiller ◽  
René Saitenmacher ◽  
Tom Walther

Abstract We study combinatorial structures in large-scale mixed-integer (nonlinear) programming problems arising in gas network optimization. We propose a preprocessing strategy exploiting the observation that a large part of the combinatorial complexity arises in certain subnetworks. Our approach analyzes these subnetworks and the combinatorial structure of the flows within them in order to provide alternative models with a stronger combinatorial structure that can be exploited by off-the-shelf solvers. In particular, we consider the modeling of operation modes for complex compressor stations (i.e., ones with several in- or outlets) in gas networks. We propose a refined model that allows precomputing tighter bounds for each operation mode, along with a number of model variants based on the refined model that exploit these tighter bounds. We provide a procedure to obtain the refined model from the input data for the original model, based on a nontrivial reduction of the graph representing the gas flow through the compressor station in an operation mode. We evaluate our model variants on reference benchmark data, showing that they reduce the average running time by between 10% on easy instances and 46% on hard instances. Moreover, for three of the four considered networks, the average number of search tree nodes is at least halved, showing the effectiveness of our model variants in guiding the solver's search.
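As a rough illustration of per-mode bound precomputation (not the paper's graph reduction), the throughput of one operation mode can be bounded by the capacities of the arcs that the mode leaves open: flow through the station can exceed neither the total open inlet capacity nor the total open outlet capacity. The function name and data layout below are assumptions for this sketch.

```python
def mode_throughput_bound(open_arcs, capacity, inlets, outlets):
    """Upper bound on station throughput in one operation mode.

    `open_arcs`  -- arcs (u, v) enabled in this mode
    `capacity`   -- dict mapping each arc to its flow capacity
    `inlets`/`outlets` -- boundary nodes of the compressor station
    """
    cap_in = sum(capacity[a] for a in open_arcs if a[0] in inlets)
    cap_out = sum(capacity[a] for a in open_arcs if a[1] in outlets)
    # Flow conservation: throughput is limited by both boundary cuts.
    return min(cap_in, cap_out)
```

Precomputing such a bound per mode tightens the big-M constants in the mode-selection constraints of the MINLP model.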


2021 ◽  
Vol 2021 ◽  
pp. 1-6
Author(s):  
Yixin Zhou ◽  
Zhen Guo

With the advent of the era of big data (BD), people's living standards and lifestyles have changed greatly, and their requirements for the quality of the service industry keep rising. Customers' personalized needs and private customization have become hot research issues. Optimizing the service industry's supply network and allocating its tasks reasonably is a research focus both at home and abroad. Against the background of BD, this paper takes the optimization of the service industry supply network as its research object and, based on an analysis of customers' personalized demand and user behavior, studies task allocation optimization for the supply network. The paper optimizes the service industry supply chain network using a genetic algorithm (GA), designing the genetic operators to effectively avoid premature convergence and improve the algorithm's efficiency. The experimental results show that when m = 8 and n = 40, the average running time of the improved GA is 54.1 s; the algorithm's network optimization runs quickly and with high stability.
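A typical genetic operator for such a task-allocation problem is one-point crossover on assignment vectors. The encoding below (gene i holds the supplier assigned to task i) is an assumption for illustration, not the paper's exact operator design.

```python
import random

def one_point_crossover(parent_a, parent_b):
    """One-point crossover on task-to-supplier assignment vectors.

    Chromosome position i holds the supplier assigned to task i; the two
    children swap tails at a random cut point, recombining assignments.
    """
    cut = random.randint(1, len(parent_a) - 1)
    return (parent_a[:cut] + parent_b[cut:],
            parent_b[:cut] + parent_a[cut:])
```

Combined with mutation and fitness-based selection, this recombines good partial allocations from both parents.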


2021 ◽  
Vol 13 (3) ◽  
pp. 11-22
Author(s):  
Kasliono ◽  
◽  
Suprapto ◽  
Faizal Makhrus

Traffic is the medium for moving from one point to another, so it plays a vital role in supporting vehicle mobility. When congestion occurs, mobility is hampered, which in turn affects other areas such as finances, air pollution, and traffic violations. This study aims to build a model that predicts the vehicle queue at a traffic light while its status is red. The prediction uses a neural network trained with the Extreme Learning Machine method to estimate queue length, and correlation analysis to measure the relationships between connected roads. The experiments use vehicle queue length data at traffic lights obtained from DISHUB (Transportation Bureau) DI Yogyakarta. Several experiments were carried out to determine the optimal prediction model of vehicle queue length; the optimal model achieved an average MAPE of 15.5882% and an average running time of 5.2226 seconds.
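An Extreme Learning Machine of the kind used here trains only its output weights, by least squares over a randomly initialized hidden layer. The sketch below assumes a tanh activation and an arbitrary hidden size, and includes the MAPE metric reported in the study; function names are illustrative.

```python
import numpy as np

def elm_fit(X, y, hidden=50, seed=0):
    """Extreme learning machine: random hidden layer, least-squares output."""
    rng = np.random.default_rng(seed)
    W = rng.normal(size=(X.shape[1], hidden))   # fixed random input weights
    b = rng.normal(size=hidden)                 # fixed random biases
    H = np.tanh(X @ W + b)                      # random feature map
    beta, *_ = np.linalg.lstsq(H, y, rcond=None)  # only these weights are trained
    return W, b, beta

def elm_predict(model, X):
    W, b, beta = model
    return np.tanh(X @ W + b) @ beta

def mape(y_true, y_pred):
    """Mean absolute percentage error, the accuracy metric used here."""
    y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
    return 100.0 * np.mean(np.abs((y_true - y_pred) / y_true))
```

Because no backpropagation is needed, training reduces to one linear solve, which is what keeps ELM running times low.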


Mathematics ◽  
2021 ◽  
Vol 9 (11) ◽  
pp. 1306
Author(s):  
Elsayed Badr ◽  
Sultan Almotairi ◽  
Abdallah El Ghamry

In this paper, we propose a novel blended algorithm that has the advantages of the trisection method and the false position method. Numerical results indicate that the proposed algorithm outperforms the secant, the trisection, the Newton–Raphson, the bisection and the regula falsi methods, as well as the hybrid of the last two methods proposed by Sabharwal, with regard to the number of iterations and the average running time.
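The paper defines the exact blending rule; as a hedged sketch of the general idea, a bracketing hybrid can evaluate both trisection points and the false-position point at each step and keep the smallest sub-bracket that still contains a sign change. The code below is an illustration of that scheme, not necessarily the authors' algorithm.

```python
def blended_root(f, a, b, tol=1e-12, max_iter=100):
    """Bracketing hybrid of trisection and false position (sketch)."""
    fa, fb = f(a), f(b)
    if fa == 0:
        return a
    if fb == 0:
        return b
    assert fa * fb < 0, "root must be bracketed by [a, b]"
    for _ in range(max_iter):
        x1, x2 = a + (b - a) / 3, b - (b - a) / 3   # trisection points
        xf = b - fb * (b - a) / (fb - fa)           # false-position point
        pts = sorted({a, x1, xf, x2, b})
        vals = [f(x) for x in pts]
        # Keep the first adjacent pair with a sign change: the new bracket
        # is at most a third of the old one, plus false position can land
        # much closer to the root for smooth functions.
        for (xl, xr), (fl, fr) in zip(zip(pts, pts[1:]), zip(vals, vals[1:])):
            if fl == 0:
                return xl
            if fl * fr <= 0:
                a, b, fa, fb = xl, xr, fl, fr
                break
        if b - a < tol:
            break
    return (a + b) / 2
```

Each iteration shrinks the bracket by at least a factor of three (versus two for bisection), which is one way such blends cut iteration counts.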


Metals ◽  
2021 ◽  
Vol 11 (3) ◽  
pp. 388 ◽  
Author(s):  
Shuai Wang ◽  
Xiaojun Xia ◽  
Lanqing Ye ◽  
Binbin Yang

Automatic detection of steel surface defects is very important for product quality control in the steel industry. However, traditional methods cannot be applied well on the production line because of their low accuracy and slow running speed. Current popular algorithms (based on deep learning) also suffer from low accuracy, leaving much room for improvement. This paper proposes a method combining an improved ResNet50 with an enhanced faster region-based convolutional neural network (faster R-CNN) to reduce the average running time and improve accuracy. First, the image is fed into the improved ResNet50 model, which adds deformable convolution networks (DCN) and improved cutout, to classify samples as defective or defect-free. If the probability of a defect is less than 0.3, the algorithm directly outputs a defect-free verdict. Otherwise, the sample is passed to the improved faster R-CNN, which adds spatial pyramid pooling (SPP), an enhanced feature pyramid network (FPN), and matrix NMS. The final output is either the location and class of each defect in the sample, or a defect-free verdict. On a data set obtained in a real factory environment, the method reaches an accuracy of 98.2%, while its average running time is faster than that of other models.
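The 0.3-probability gate between the classifier and the detector amounts to a simple two-stage cascade. The sketch below uses stand-in callables for the two networks (the real ResNet50 and faster R-CNN models are not reproduced here); names and the result format are assumptions.

```python
def cascade_detect(image, classify, detect, threshold=0.3):
    """Two-stage cascade: a fast classifier screens each sample, and only
    probable defects reach the slower detector.

    `classify(image)` -> probability that the sample has a defect
    `detect(image)`   -> list of (box, label) detections
    """
    p = classify(image)
    if p < threshold:
        # Cheap early exit: most defect-free samples never hit the detector,
        # which is where the average running time savings come from.
        return {"defect": False, "detections": []}
    return {"defect": True, "detections": detect(image)}
```

The threshold trades recall against speed: lowering it sends more samples to the detector.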


2020 ◽  
Vol 4 (6) ◽  
pp. 1190-1197
Author(s):  
Taufiq Odhi Dwi Putra ◽  
Wisnu Widiarto ◽  
Wiharto

Load balancing is one of the main parts of Grid resource scheduling. One load balancing model for Grid resources is the hierarchical model, whose advantage is that it requires minimal communication costs between resources. The PLBA load balancing algorithm uses a hierarchical model with dynamically obtained threshold values, so it can adapt to conditions at any given moment: the state of the resources, the state of the computer network, and the state of the recipient or client. PVM3 is a software system capable of harnessing heterogeneous resources so that they can work in parallel and complete even very large and complex tasks. This research implemented the PLBA load balancing algorithm with the aim of optimizing Grid resources, and further developed the algorithm by changing the argument for NPEList so that resources can be grouped more optimally. With the modified NPEList argument, the running time required to complete the given tasks is shorter because resources are grouped more optimally. This is shown by the lower average running time when using the modified NPEList argument (0.75 * threshold1 <= ALCi <= 1.25 * threshold1) than when using the NPEList argument from previous research (ALCi = threshold1). The following comparisons of average running time were obtained: (82513.63740 : 67837.71720); (63869.92450 : 50722.17210); (858.96710 : 207.33680); (321.88000 : 126.89100); (768.54560 : 468.27190); (780.22770 : 279.43730).
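The modified NPEList argument reads as a band-membership test on each resource's accumulated load ALCi, rather than exact equality with threshold1. A minimal sketch of that selection (function name assumed):

```python
def npe_list(alc, threshold1, band=0.25):
    """Select resources for the NPEList under the modified argument.

    A resource i qualifies when its accumulated load ALCi falls within
    +/- `band` of threshold1, instead of matching it exactly as in the
    original argument (ALCi == threshold1).
    """
    lo, hi = (1 - band) * threshold1, (1 + band) * threshold1
    return [i for i, load in enumerate(alc) if lo <= load <= hi]
```

Widening the band admits more near-threshold resources into each group, which is how the modification improves grouping.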


2020 ◽  
Vol 10 (1) ◽  
Author(s):  
Ali Arab ◽  
Betty Chinda ◽  
George Medvedev ◽  
William Siu ◽  
Hui Guo ◽  
...  

Abstract This project aimed to develop and evaluate a fast, fully automated deep-learning method applying convolutional neural networks with deep supervision (CNN-DS) for accurate hematoma segmentation and volume quantification in computed tomography (CT) scans. Non-contrast whole-head CT scans of 55 patients with hemorrhagic stroke were used. Individual scans were standardized to 64 axial slices of 128 × 128 voxels. Each voxel was annotated independently by experienced raters, generating a binary label of hematoma versus normal brain tissue based on majority voting. The dataset was split randomly into training (n = 45) and testing (n = 10) subsets. A CNN-DS model was built on the training data and examined on the testing data. Performance of the CNN-DS solution was compared with three previously established methods. The CNN-DS achieved a Dice coefficient of 0.84 ± 0.06 and a recall of 0.83 ± 0.07, higher than patch-wise U-Net (< 0.76). The CNN-DS average running time of 0.74 ± 0.07 s was faster than PItcHPERFeCT (> 1412 s) and slice-based U-Net (> 12 s). Comparable interrater agreement rates were observed between “method-human” and “human-human” comparisons (Cohen’s kappa coefficients > 0.82). The fully automated CNN-DS approach demonstrated expert-level accuracy in fast segmentation and quantification of hematoma, substantially improving over previous methods. Further research is warranted to test the CNN-DS solution as a software tool in clinical settings for effective stroke management.
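The reported Dice coefficient and recall are standard voxel-overlap metrics; they can be computed over flattened binary masks as follows (a minimal sketch, not the authors' evaluation code):

```python
def dice(pred, truth):
    """Dice coefficient between two binary masks: 2|A∩B| / (|A| + |B|)."""
    inter = sum(p and t for p, t in zip(pred, truth))
    total = sum(pred) + sum(truth)
    return 1.0 if total == 0 else 2.0 * inter / total

def recall(pred, truth):
    """Voxel recall: fraction of true hematoma voxels the model recovered."""
    tp = sum(p and t for p, t in zip(pred, truth))
    fn = sum((not p) and t for p, t in zip(pred, truth))
    return tp / (tp + fn)
```

Dice balances false positives and false negatives, which is why it is preferred over plain accuracy for small lesions that occupy few voxels.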

