computational time
Recently Published Documents


TOTAL DOCUMENTS: 3807 (five years: 1814)
H-INDEX: 45 (five years: 13)

Robotics ◽ 2022 ◽ Vol 11 (1) ◽ pp. 16
Author(s): Matteo Bottin, Giovanni Boschetti, Giulio Rosati

Industrial robot applications should be designed so that the robot provides the best performance for increasing throughput. In this regard, both trajectory and task-order optimization are crucial, since they can heavily impact cycle time. Moreover, it is very common for a robotic application to be kinematically or functionally redundant, so that multiple arm configurations may fulfill the same task at the working points. In this context, even if the working cycle is composed of a small number of points, the number of possible sequences can be very high, and the robot programmer usually cannot evaluate them all to obtain the shortest possible cycle time. One of the most well-known problems used to define the optimal task order is the Travelling Salesman Problem (TSP), but in its original formulation it does not make it possible to consider different robot configurations at the same working point. This paper aims to overcome this limitation by adding mathematical and conceptual constraints to the problem. With these improvements, the TSP can be used successfully to optimize the cycle time of industrial robotic tasks where multiple configurations are allowed at the working points. Simulation and experimental results are presented to assess how cost (cycle time) and computational time are influenced by the proposed implementation.
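The extension described in the abstract can be illustrated with a small brute-force sketch (a hypothetical example, not the authors' formulation): each working point is visited exactly once, but the arm configuration used at that point is also a decision variable, so the move cost between two points depends on the configurations chosen at both ends.

```python
# Minimal sketch (assumed example, not the paper's exact formulation):
# a TSP in which every working point can be reached with several robot
# configurations, and the move cost depends on the configuration chosen
# at both endpoints.
from itertools import permutations, product

points = {0: [0, 1], 1: [0, 1, 2], 2: [0]}          # point -> allowed configurations

def move_cost(a, b):
    (i, ci), (j, cj) = a, b
    return abs(i - j) + 0.3 * abs(ci - cj)          # hypothetical time metric

def best_cycle(points):
    best = (float("inf"), None)
    ids = list(points)
    for order in permutations(ids[1:]):              # fix the first point of the cycle
        tour = (ids[0],) + order
        for confs in product(*(points[p] for p in tour)):
            nodes = list(zip(tour, confs))
            cost = sum(move_cost(nodes[k], nodes[(k + 1) % len(nodes)])
                       for k in range(len(nodes)))
            if cost < best[0]:
                best = (cost, nodes)
    return best

print(best_cycle(points))   # (cycle time, [(point, configuration), ...])
```

An exhaustive search like this scales factorially with the number of points and configurations; the paper's contribution is a constrained TSP formulation that handles the configuration choice without such enumeration.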


Mining ◽ 2022 ◽ Vol 2 (1) ◽ pp. 32-51
Author(s): Devendra Joshi, Amol Paithankar, Snehamoy Chatterjee, Sk Md Equeenuddin

Open pit mine production scheduling is a computationally expensive, large-scale mixed-integer linear programming problem. This research develops a computationally efficient algorithm to solve open pit production scheduling problems under uncertain geological parameters. The proposed solution approach is a two-stage process. In the first stage, the stochastic production scheduling problem is solved iteratively, with the resource constraints relaxed, using a parametric graph closure algorithm. In the second stage, a branch-and-cut algorithm is applied to enforce the resource constraints, which may have been violated during the first stage. Six small-scale production scheduling problems from iron and copper mines were used to validate the proposed stochastic production scheduling model. The results demonstrate that the proposed method can significantly improve the computational time with a reasonable optimality gap (at most 4%). In addition, the proposed stochastic method is tested on industrial-scale copper data and compared with its deterministic counterpart. The results show that the net present value for the stochastic model improved by 6% compared to the deterministic model.
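For readers unfamiliar with the underlying optimization problem, the sketch below sets up a toy deterministic block-scheduling MILP (discounted value objective with precedence and capacity constraints) using PuLP. It only illustrates the generic problem class; the paper's stochastic two-stage method with parametric graph closure is not reproduced here, and all data are invented.

```python
# Toy deterministic open-pit block-scheduling MILP (illustration of the problem
# class only; the paper's stochastic parametric-graph-closure method is not shown).
from pulp import LpProblem, LpVariable, LpBinary, LpMaximize, lpSum, PULP_CBC_CMD

blocks = {0: 5.0, 1: -1.0, 2: 8.0}          # block -> economic value (toy data)
tonnage = {0: 10, 1: 10, 2: 10}
precedence = {2: [0, 1]}                     # block 2 may be mined only after 0 and 1
periods, capacity, discount = range(3), 15, 0.1

m = LpProblem("open_pit_scheduling", LpMaximize)
x = {(b, t): LpVariable(f"x_{b}_{t}", cat=LpBinary) for b in blocks for t in periods}

# Discounted value objective.
m += lpSum(blocks[b] / (1 + discount) ** t * x[b, t] for b in blocks for t in periods)
# Each block is mined at most once.
for b in blocks:
    m += lpSum(x[b, t] for t in periods) <= 1
# Mining capacity per period.
for t in periods:
    m += lpSum(tonnage[b] * x[b, t] for b in blocks) <= capacity
# Precedence: a block may be mined by period t only if its predecessors
# were mined by period t as well.
for b, preds in precedence.items():
    for p in preds:
        for t in periods:
            m += lpSum(x[b, s] for s in periods if s <= t) <= \
                 lpSum(x[p, s] for s in periods if s <= t)

m.solve(PULP_CBC_CMD(msg=False))
print([(b, t) for (b, t), v in x.items() if v.value() and v.value() > 0.5])
```

In realistic instances the number of blocks reaches millions, which is why relaxations and specialized first-stage algorithms such as the one in the paper matter.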


2022 ◽ Vol 14 (2) ◽ pp. 367
Author(s): Zhen Zheng, Bingting Zha, Yu Zhou, Jinbo Huang, Youshi Xuchen, ...

This paper proposes a single-stage adaptive multi-scale noise filtering algorithm for point clouds based on feature information, addressing the difficulty that current laser point cloud filtering algorithms have in quickly completing single-stage adaptive filtering of multi-scale noise. The feature information of each point is obtained using an efficient k-dimensional (k-d) tree data structure and an amended normal vector estimation method, and an adaptive threshold is used to divide the point cloud into large-scale noise, a feature-rich region, and a flat region to reduce the computational time. The large-scale noise is removed directly, while the feature-rich and flat regions are filtered with an improved bilateral filtering algorithm and a weighted average filtering algorithm based on grey relational analysis, respectively. Simulation results show that the proposed algorithm performs better than state-of-the-art comparison algorithms. It was thus verified that the proposed algorithm can quickly and adaptively (i) filter out large-scale noise, (ii) smooth small-scale noise, and (iii) effectively maintain the geometric features of the point cloud. The developed algorithm provides a starting point for filtering pre-processing methods applicable to 3D measurement, remote sensing, and target recognition based on point clouds.
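A minimal sketch of the neighbourhood-feature step (an assumed implementation using scipy's k-d tree, not the authors' code): for each point, the eigenvalues of the local covariance give a "surface variation" measure, which thresholds then use to split the cloud into flat and feature-rich regions, with isolated points flagged as large-scale noise. The threshold values are arbitrary placeholders.

```python
# Minimal sketch (assumed implementation): classify points of a cloud into
# large-scale noise / feature-rich / flat regions from k-d tree neighbourhoods.
import numpy as np
from scipy.spatial import cKDTree

def classify_points(cloud, k=16, noise_dist=0.5, feature_thresh=0.05):
    tree = cKDTree(cloud)
    dists, idx = tree.query(cloud, k=k + 1)          # first neighbour is the point itself
    labels = np.empty(len(cloud), dtype="<U12")
    for i, (d, nb) in enumerate(zip(dists, idx)):
        if d[1:].mean() > noise_dist:                # isolated point -> large-scale noise
            labels[i] = "noise"
            continue
        neigh = cloud[nb[1:]]
        cov = np.cov(neigh.T)                        # 3x3 local covariance
        evals = np.sort(np.linalg.eigvalsh(cov))
        variation = evals[0] / evals.sum()           # small on planar patches
        labels[i] = "feature" if variation > feature_thresh else "flat"
    return labels

cloud = np.random.rand(1000, 3)                      # toy data
print(np.unique(classify_points(cloud), return_counts=True))
```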


Computation ◽ 2022 ◽ Vol 10 (1) ◽ pp. 5
Author(s): Vasileios K. Mantzaroudis, Dimitrios G. Stamatelos

When catastrophic failure phenomena in aircraft structures, such as debonding, are numerically analyzed during the design process within the "Damage Tolerance" philosophy, extreme requirements in terms of time and computational resources arise. Here, these requirements are reduced by developing a numerical model that efficiently treats the debonding phenomena occurring due to the buckling behavior of composite stiffened panels under compressive loads. The Finite Element (FE) models, developed in the ANSYS© software (Canonsburg, PA, USA), are calibrated and validated against published experimental and numerical results for single-stringer compression specimens (SSCS). Different model features, such as the element type used (solid and solid shell) and the Cohesive Zone Modeling (CZM) parameters, are examined for their impact on the efficiency of the model in terms of accuracy versus computational cost. It is shown that a significant reduction in computational time is achieved without compromising accuracy when the proposed FE model is adopted. The outcome of the present work leads to guidelines for developing FE models of stiffened panels that accurately predict the buckling and post-buckling behavior leading to debonding, at minimal computational and time cost. The methodology serves as a tool for generating a universal parametric numerical model for analyzing debonding phenomena of any stiffened panel configuration by modifying the corresponding geometric, material, and damage properties.
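The cohesive-zone idea referred to above can be illustrated with the common bilinear traction-separation law (a generic textbook form with invented parameters, not necessarily the exact law or values used in the paper's ANSYS models): traction rises linearly to a peak at the damage-initiation separation and then degrades linearly to zero at the final separation.

```python
# Generic bilinear cohesive traction-separation law (illustrative only; the
# paper's ANSYS CZM law and parameter values may differ).
import numpy as np

def bilinear_traction(delta, t_max=30.0, delta_0=0.01, delta_f=0.2):
    """Traction [MPa] for a separation delta [mm] under a bilinear CZM."""
    delta = np.asarray(delta, dtype=float)
    rising = np.clip(delta / delta_0, 0.0, 1.0) * t_max                      # elastic branch
    softening = np.clip((delta_f - delta) / (delta_f - delta_0), 0.0, 1.0) * t_max
    return np.where(delta <= delta_0, rising, softening)

# Fracture toughness is the area under the curve: G_c = 0.5 * t_max * delta_f.
deltas = np.linspace(0.0, 0.25, 6)
print(bilinear_traction(deltas))
```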


2022 ◽ Vol 9
Author(s): Bangyu Wu, Wenzhuo Tan, Wenhao Xu, Bo Li

The large computational memory requirement is an important issue in 3D large-scale wave modeling, especially for GPU calculation. Based on the observation that wave propagation velocity tends to increase gradually with depth, we propose a 3D trapezoid-grid finite-difference time-domain (FDTD) method that reduces memory usage without a significant increase in computational time or a decrease in modeling accuracy. It adopts a size-increasing trapezoid-grid mesh that fits the increasing trend of seismic wave velocity with depth, which can significantly reduce oversampling in the high-velocity region. A trapezoid coordinate transformation is used to alleviate the difficulty of handling non-uniform grids. We derive the 3D acoustic equation in the new trapezoid coordinate system and adopt the corresponding trapezoid-grid convolutional perfectly matched layer (CPML) absorbing boundary condition to eliminate artificial boundary reflections. A stability analysis is given to ensure stable modeling results. Numerical tests on a 3D homogeneous model verify the effectiveness of our method and of the trapezoid-grid CPML absorbing boundary condition, while numerical tests on the SEG/EAGE overthrust model indicate that, for comparable computational time and accuracy, our method achieves about a 50% reduction in memory usage compared with the uniform-grid FDTD method.
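The grid idea can be sketched as follows (an assumed illustration of a generic size-increasing mesh, not the authors' exact coordinate transformation): horizontal spacing grows with depth, so deep high-velocity layers are sampled more coarsely than the shallow low-velocity part, which is where the memory saving comes from.

```python
# Sketch of a depth-dependent (trapezoid-like) grid: horizontal spacing grows
# linearly with depth (illustrative assumption, not the paper's exact mapping).
import numpy as np

def trapezoid_grid(nx=101, nz=101, dz=10.0, dx0=10.0, growth=0.02):
    """Return physical x-coordinates (one row per depth level) and depths z."""
    z = np.arange(nz) * dz
    dx = dx0 * (1.0 + growth * np.arange(nz))        # spacing widens with depth
    # Each depth level keeps nx samples but spans a wider physical extent.
    x = (np.arange(nx) - (nx - 1) / 2)[None, :] * dx[:, None]
    return x, z

x, z = trapezoid_grid()
print(x[0, -1] - x[0, 0], x[-1, -1] - x[-1, 0])      # shallow vs. deep lateral extent
```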


2022 ◽ Vol 2022 ◽ pp. 1-12
Author(s): K. Ramash Kumar, T. S. Anandhi, B. Vijayakrishna, S. Balakumar

This paper studies a new Hybrid Posicast Control (HPC) for the Fundamental KY Boost Converter (FKYBC) operating in Continuous Current Mode (CCM). Posicast is a feed-forward compensator: it reduces the overshoot in the step response of a lightly damped plant. However, the conventional posicast approach is sensitive to changes in the natural frequency. To reduce this undesirable sensitivity and to regulate the load voltage of the FKYBC, an HPC is designed in this article. The structure of the HPC is a posicast within a feedback loop. The independent computational time delay is the main design parameter of the posicast. The performance of the FKYBC with HPC is verified over various operating regions using MATLAB/Simulink and an experimental model. The posicast function values are implemented on an Arduino Uno ATmega328P microcontroller. Compared with traditional PID control, the new HPC produces minimal noise in the control signal.
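A brief sketch of classical half-cycle posicast (the generic textbook form with assumed plant parameters, not the article's exact controller): the step command is split into two parts, the first scaled by 1/(1+δ), and the remainder applied half a damped period later, where δ is the plant's fractional overshoot.

```python
# Classical half-cycle posicast pre-shaping (generic textbook form with assumed
# plant parameters; the article's HPC wraps this idea inside a feedback loop).
import numpy as np

def posicast_params(zeta, wn):
    """Return (first-step fraction, time delay) for a 2nd-order plant."""
    delta = np.exp(-np.pi * zeta / np.sqrt(1.0 - zeta ** 2))   # fractional overshoot
    wd = wn * np.sqrt(1.0 - zeta ** 2)                         # damped natural frequency
    td = np.pi / wd                                            # half of the damped period
    return 1.0 / (1.0 + delta), td

def posicast_reference(t, step=1.0, zeta=0.2, wn=2 * np.pi * 50):
    """Two-step shaped reference replacing a plain step command."""
    p1, td = posicast_params(zeta, wn)
    return step * np.where(t < td, p1, 1.0)

t = np.linspace(0.0, 0.05, 500)
print(posicast_params(0.2, 2 * np.pi * 50))
```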


Author(s): Karn Moonsri, Kanchana Sethanan, Kongkidakhon Worasan

Outbound logistics is a crucial field of logistics management. This study considers distribution planning for the poultry industry in Thailand. The goal of the study is to minimize the transportation cost for the multi-depot vehicle-routing problem (MDVRP). A novel enhanced differential evolution algorithm (RI-DE) is developed based on a new re-initialization mutation formula and a local search function. A mixed-integer programming formulation is presented in order to measure the performance of the heuristic against GA, PSO, and DE for small-sized instances. For large-sized instances, RI-DE is compared to the traditional DE algorithm for solving the MDVRP using published benchmark instances. The results demonstrate that RI-DE obtained near-optimal solutions of 99.03% and outperformed the traditional DE algorithm with a 2.53% relative improvement, not only in terms of solution quality but also in terms of computational time.
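A minimal sketch of the baseline differential evolution loop with a random-key encoding for routing (an illustrative assumption; the paper's RI-DE re-initialization mutation formula and local search are not reproduced here):

```python
# Baseline DE/rand/1/bin with a random-key decoding for a routing problem
# (illustration only; the paper's RI-DE re-initialization formula and local
# search are not reproduced).
import numpy as np

rng = np.random.default_rng(0)
n_customers, pop_size, F, CR, generations = 8, 20, 0.7, 0.9, 200
dist = rng.random((n_customers, n_customers)); dist = (dist + dist.T) / 2

def decode(keys):
    return np.argsort(keys)                          # random keys -> visiting order

def route_cost(keys):
    order = decode(keys)
    return sum(dist[order[i], order[(i + 1) % n_customers]] for i in range(n_customers))

pop = rng.random((pop_size, n_customers))
cost = np.array([route_cost(ind) for ind in pop])
for _ in range(generations):
    for i in range(pop_size):
        a, b, c = pop[rng.choice([j for j in range(pop_size) if j != i], 3, replace=False)]
        mutant = a + F * (b - c)                             # DE/rand/1 mutation
        cross = rng.random(n_customers) < CR
        cross[rng.integers(n_customers)] = True              # guarantee one gene crosses
        trial = np.where(cross, mutant, pop[i])
        if (c_trial := route_cost(trial)) < cost[i]:         # greedy selection
            pop[i], cost[i] = trial, c_trial
print(cost.min())
```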


2022 ◽ Vol 90 (2)
Author(s): Edward Laughton, Vidhi Zala, Akil Narayan, Robert M. Kirby, David Moxey

As the use of spectral/hp element methods, and high-order finite element methods in general, continues to spread, community efforts to create efficient, optimized algorithms associated with fundamental high-order operations have grown. Core tasks such as solution expansion evaluation at quadrature points, stiffness and mass matrix generation, and matrix assembly have received tremendous attention. With the expansion of the types of problems to which high-order methods are applied, and correspondingly the growth in types of numerical tasks accomplished through high-order methods, the number and types of these core operations broaden. This work focuses on solution expansion evaluation at arbitrary points within an element. This operation is core to many postprocessing applications such as evaluation of streamlines and pathlines, as well as to field projection techniques such as mortaring. We expand barycentric interpolation techniques developed on an interval to 2D (triangles and quadrilaterals) and 3D (tetrahedra, prisms, pyramids, and hexahedra) spectral/hp element methods. We provide efficient algorithms for their implementations, and demonstrate their effectiveness using the spectral/hp element library Nektar++ by running a series of baseline evaluations against the 'standard' Lagrangian method, where an interpolation matrix is generated and matrix multiplication applied to evaluate a point at a given location. We present results from a rigorous series of benchmarking tests for a variety of element shapes, polynomial orders and dimensions. We show that when the point of interest is to be repeatedly evaluated, the barycentric method performs at worst 50% slower when compared to a cached matrix evaluation. However, when the point of interest changes repeatedly so that the interpolation matrix must be regenerated in the 'standard' approach, the barycentric method yields far greater performance, with a minimum speedup factor of 7×. Furthermore, when derivatives of the solution evaluation are also required, the barycentric method in general slightly outperforms the cached interpolation matrix method across all elements and orders, with an up to 30% speedup. Finally, we investigate a real-world example of scalar transport using a non-conformal discontinuous Galerkin simulation, in which we observe around a 6× speedup in computational time for the barycentric method compared to the matrix-based approach. We also explore the complexity of both interpolation methods and show that the barycentric interpolation method requires O(k) storage compared to a best-case space complexity of O(k²) for the Lagrangian interpolation matrix method.
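The 1D building block that the paper extends to 2D/3D elements is the barycentric interpolation formula of Berrut and Trefethen; a minimal generic sketch (not Nektar++ code) is shown below. Once the weights are precomputed, each evaluation costs O(k) operations and O(k) storage, which is the advantage discussed in the abstract.

```python
# 1D barycentric Lagrange interpolation (Berrut & Trefethen), the building block
# extended to 2D/3D elements in the paper; generic sketch, not Nektar++ code.
import numpy as np

def barycentric_weights(nodes):
    w = np.ones_like(nodes)
    for j, xj in enumerate(nodes):
        w[j] = 1.0 / np.prod(xj - np.delete(nodes, j))
    return w

def barycentric_eval(x, nodes, values, w):
    diff = x - nodes
    exact = np.isclose(diff, 0.0)
    if exact.any():
        return values[exact][0]                   # x coincides with a node
    tmp = w / diff
    return np.dot(tmp, values) / tmp.sum()        # O(k) work per evaluation

nodes = np.cos(np.pi * np.arange(6) / 5)          # Chebyshev-Lobatto nodes
values = np.sin(nodes)
w = barycentric_weights(nodes)
print(barycentric_eval(0.3, nodes, values, w), np.sin(0.3))
```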


Author(s): Nikita Doikov, Yurii Nesterov

In this paper, we develop new affine-invariant algorithms for solving composite convex minimization problems with bounded domain. We present a general framework of Contracting-Point methods, which solve at each iteration an auxiliary subproblem restricting the smooth part of the objective function onto a contraction of the initial domain. This framework provides us with a systematic way of developing optimization methods of different order, endowed with global complexity bounds. We show that, using an appropriate affine-invariant smoothness condition, it is possible to implement one iteration of the Contracting-Point method by one step of the pure tensor method of degree p ≥ 1. The resulting global rate of convergence in functional residual is then O(1/k^p), where k is the iteration counter. It is important that all constants in our bounds are affine-invariant. For p = 1, our scheme recovers the well-known Frank–Wolfe algorithm, providing it with a new interpretation from the general perspective of tensor methods. Finally, within our framework, we present an efficient implementation and total complexity analysis of the inexact second-order scheme (p = 2), called the Contracting Newton method. It can be seen as a proper implementation of the trust-region idea. Preliminary numerical results confirm its good practical performance both in the number of iterations and in computational time.
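For p = 1 the scheme reduces to Frank–Wolfe; a minimal sketch on the probability simplex (a generic illustration, not the authors' code), where the linear minimization oracle simply picks the vertex with the smallest gradient component:

```python
# Generic Frank-Wolfe (conditional gradient) on the probability simplex,
# the p = 1 special case mentioned in the abstract (illustrative sketch only).
import numpy as np

def frank_wolfe(grad, x0, steps=200):
    x = x0.copy()
    for k in range(steps):
        g = grad(x)
        s = np.zeros_like(x)
        s[np.argmin(g)] = 1.0                 # linear minimization oracle on the simplex
        gamma = 2.0 / (k + 2.0)               # standard step size, gives an O(1/k) rate
        x = (1.0 - gamma) * x + gamma * s     # convex combination stays feasible
    return x

# Example: minimize ||A x - b||^2 over the simplex.
rng = np.random.default_rng(0)
A, b = rng.random((10, 5)), rng.random(10)
grad = lambda x: 2.0 * A.T @ (A @ x - b)
x = frank_wolfe(grad, np.full(5, 0.2))
print(x, x.sum())
```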


2022
Author(s): Alexandre Perez-Lebel, Gaël Varoquaux, Marine Le Morvan, Julie Josse, Jean-Baptiste Poline

BACKGROUND: As databases grow larger, it becomes harder to fully control their collection, and they frequently come with missing values: incomplete observations. These large databases are well suited to train machine-learning models, for instance for forecasting or to extract biomarkers in biomedical settings. Such predictive approaches can use discriminative, rather than generative, modeling, and thus open the door to new missing-values strategies. Yet existing empirical evaluations of strategies to handle missing values have focused on inferential statistics.

RESULTS: Here we conduct a systematic benchmark of missing-values strategies in predictive models with a focus on large health databases: four electronic health record datasets, a population brain imaging one, a health survey and two intensive care ones. Using gradient-boosted trees, we compare native support for missing values with simple and state-of-the-art imputation prior to learning. We investigate prediction accuracy and computational time. For prediction after imputation, we find that adding an indicator to express which values have been imputed is important, suggesting that the data are missing not at random. Elaborate missing-values imputation can improve prediction compared to simple strategies but requires longer computational time on large data. Learning trees that model missing values, with the missing-incorporated-attribute approach, leads to robust, fast, and well-performing predictive modeling.

CONCLUSIONS: Native support for missing values in supervised machine learning predicts better than state-of-the-art imputation with much less computational cost. When using imputation, it is important to add indicator columns expressing which values have been imputed.
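The comparison described above can be reproduced in miniature with scikit-learn (a sketch of the general approach on synthetic data, not the authors' benchmark code): gradient-boosted trees with native NaN handling versus mean imputation with indicator columns.

```python
# Miniature version of the comparison (sketch only, not the paper's benchmark):
# native NaN support in gradient-boosted trees vs. imputation + indicator columns.
import numpy as np
from sklearn.ensemble import HistGradientBoostingRegressor
from sklearn.impute import SimpleImputer
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 10))
y = X[:, 0] + 2 * X[:, 1] + rng.normal(scale=0.1, size=2000)
X[rng.random(X.shape) < 0.2] = np.nan            # 20% of values missing (toy data)

native = HistGradientBoostingRegressor()         # handles NaN natively
imputed = make_pipeline(
    SimpleImputer(strategy="mean", add_indicator=True),  # indicator marks imputed cells
    HistGradientBoostingRegressor(),
)
print("native :", cross_val_score(native, X, y, cv=5).mean())
print("imputed:", cross_val_score(imputed, X, y, cv=5).mean())
```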

