high convergence rate
Recently Published Documents

TOTAL DOCUMENTS: 61 (five years: 31)
H-INDEX: 6 (five years: 3)

2021 ◽  
Vol 12 (1) ◽  
pp. 28
Author(s):  
Hafiz Muhammad Tayyab ◽  
Yaqoob Javed ◽  
Irfan Ullah ◽  
Abid Ali Dogar ◽  
Burhan Ahmed

A major challenge in photovoltaic (PV) systems is determining the maximum power point (MPP) under changing environmental conditions. To overcome the limitations of existing techniques, namely the need for a high convergence rate with fewer fluctuations, a hybrid model based on the fractional open-circuit voltage method is proposed; for partial shading, incremental conductance is used. The proposed technique is highly efficient and reaches the MPP in less time. Its robustness has been verified using MATLAB/Simulink, which clearly shows that the proposed technique achieves higher efficiency than other MPP tracking methods.
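The core of the fractional open-circuit voltage (FOCV) idea can be sketched in a few lines: the MPP voltage is approximated as a fixed fraction k of the measured open-circuit voltage. The value k = 0.76 and the toy PV curve below are illustrative assumptions, not taken from the paper.

```python
import math

def pv_power(v, v_oc=40.0, i_sc=8.0):
    """Toy PV power curve: current falls off sharply as v approaches v_oc."""
    if v < 0 or v > v_oc:
        return 0.0
    i = i_sc * (1.0 - (v / v_oc) ** 8)   # crude fill-factor-like shape
    return v * i

def focv_estimate(v_oc, k=0.76):
    """FOCV rule: operate at a fixed fraction of the open-circuit voltage."""
    return k * v_oc

v_ref = focv_estimate(40.0)   # 30.4 V operating setpoint
print(v_ref, round(pv_power(v_ref), 1))
```

In a real MPPT controller the open-circuit voltage is re-measured periodically (briefly disconnecting the load), and the paper's hybrid switches to incremental conductance under partial shading, which this sketch does not model.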


2021 ◽  
Vol 12 ◽  
Author(s):  
Chia-Wen Chen ◽  
Wen-Chung Wang ◽  
Magdalena Mo Ching Mok ◽  
Ronny Scherer

Compositional items – a form of forced-choice items – require respondents to allocate a fixed total number of points to a set of statements. The Thurstonian item response theory (IRT) model was developed to describe responses to these items. Despite its prominence, the model requires that the items, composed of parts of statements, yield a factor loading matrix with full rank; without this requirement, the model cannot be identified, and the latent trait estimates would be seriously biased. Moreover, estimation of the Thurstonian IRT model often runs into convergence problems. To address these issues, this study developed a new version of the Thurstonian IRT model for analyzing compositional items – the lognormal ipsative model (LIM) – which is sufficient for tests in which all statements are positively phrased and factor loadings are equal. We developed an online value test following Schwartz's values theory using compositional items and collected response data from N = 512 participants aged 13 to 51 years. The results showed that the LIM had an acceptable fit to the data and that the reliabilities exceeded 0.85. A simulation study showed good parameter recovery, a high convergence rate, and sufficient estimation precision across various conditions of trait covariance matrices, test lengths, and sample sizes. Overall, the results indicate that the proposed model can overcome the problems of the Thurstonian IRT model when all statements are positively phrased and factor loadings are similar.
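As a heavily hedged illustration (not the paper's model specification), one way compositional responses of this kind arise is by drawing a positive "utility" per statement, lognormal as the model's name suggests, and renormalizing so the allocated points sum to the fixed total. The trait values and point total below are arbitrary.

```python
import math
import random

random.seed(42)

def compositional_response(thetas, total=10):
    """Allocate `total` points across statements from lognormal utilities."""
    utils = [math.exp(th + random.gauss(0, 1)) for th in thetas]
    s = sum(utils)
    raw = [total * u / s for u in utils]
    pts = [round(r) for r in raw]
    pts[0] += total - sum(pts)   # absorb rounding error so the sum is exact
    return pts

resp = compositional_response([0.5, 0.0, -0.5], total=10)
print(resp, sum(resp))   # points always sum to 10
```

The fixed-sum constraint is exactly what makes the data ipsative: one statement's allocation can only rise at the expense of the others.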


Author(s):  
Guofang Li ◽  
Gang Wang ◽  
Junfang Ni ◽  
Liang Li

In this study, the free vibration of a beam with material properties and cross section varying arbitrarily along the axial direction is investigated using the so-called Spectro-Geometric Method. The cross-sectional area and the second moment of area of the beam are both expanded into Fourier cosine series, which are mathematically capable of representing any variable cross section. The Young's modulus, mass density, and shear modulus, which vary along the lengthwise direction of the beam, are also expanded into Fourier cosine series. The translational displacement and the rotation of the cross section are expressed as Fourier series supplemented with polynomial functions, which handle the elastic boundary conditions with higher accuracy and a high convergence rate. According to Hamilton's principle, the eigenvalues and the Fourier coefficients can be obtained. Examples are presented to validate the accuracy of the method and to study the influence of the parameters on the vibration of the beam. The results show that the first four natural frequencies gradually decrease as the radius coefficient [Formula: see text] increases, and decrease as the gradient parameter n increases under clamped–clamped end supports. Compared with a uniform beam, the stiffness of the functionally graded Timoshenko beam with arbitrary cross section varies along the length, which changes the vibration amplitude of the beam.
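The building block of the method, expanding a lengthwise-varying property into a Fourier cosine series, can be sketched directly; the linearly tapered area profile below is an assumed example, not from the paper, and the coefficients are computed by simple midpoint-rule quadrature.

```python
import math

L_beam = 1.0

def area(x):
    """Assumed illustrative profile: linearly tapered cross-sectional area."""
    return 1.0 + 0.5 * x / L_beam

def cosine_coeffs(f, n_terms, n_quad=2000):
    """Coefficients a_m of f(x) ~ a_0 + sum_m a_m cos(m*pi*x/L) on [0, L]."""
    dx = L_beam / n_quad
    xs = [(i + 0.5) * dx for i in range(n_quad)]
    coeffs = []
    for m in range(n_terms):
        integ = sum(f(x) * math.cos(m * math.pi * x / L_beam) for x in xs) * dx
        coeffs.append(integ / L_beam if m == 0 else 2.0 * integ / L_beam)
    return coeffs

def series_eval(coeffs, x):
    """Evaluate the truncated cosine series at x."""
    return sum(a * math.cos(m * math.pi * x / L_beam)
               for m, a in enumerate(coeffs))

a = cosine_coeffs(area, 20)
print(round(a[0], 4))                  # mean area over the span, 1.25
print(round(series_eval(a, 0.5), 4))   # recovers A(0.5) = 1.25
```

In the paper the same expansion is applied to the second moment of area, modulus, and density, and the resulting series are carried into the Hamiltonian formulation to yield the eigenvalue problem.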


2021 ◽  
Vol 2021 ◽  
pp. 1-12
Author(s):  
Muhammad Khalid ◽  
Abid Muhammad Khan ◽  
Muhammad Rauf ◽  
Muhammad Taha Jilani ◽  
Sheraz Afzal

The performance of time-domain channel estimation deteriorates in the presence of Gaussian mixture model (GMM) noise, which leads to a high mean squared error (MSE), a challenging issue. Performance degrades further when the estimator's complexity is high, as it is when a high convergence rate must be achieved. In this paper, an optimized channel estimation method with low complexity and high accuracy in the GMM environment is proposed. It employs an improved Gauss-Seidel iterative method with a minimum number of iterations; the convergence rate of the Gauss-Seidel method is improved by estimating an appropriate initial guess when no guard bands are used in the orthogonal frequency-division multiplexing (OFDM) symbol. Simulation results show an acceptable MSE in GMM environments for impulsive-noise probabilities of up to 5%. This paper also presents the design and implementation of the proposed estimator on the NEXYS-2 FPGA platform, reporting resource allocation, reconfigurability, the schematic, and the timing diagram for detailed insight.
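For reference, plain Gauss-Seidel iteration (not the paper's improved variant) solves A x = b by sweeping through the unknowns in place; a good initial guess x0 cuts the iteration count, which is the lever the paper pulls for channel estimation. The 3x3 diagonally dominant system below is an illustrative assumption.

```python
def gauss_seidel(A, b, x0, iters=25):
    """In-place Gauss-Seidel sweeps: each unknown uses the freshest values."""
    n = len(b)
    x = list(x0)
    for _ in range(iters):
        for i in range(n):
            s = sum(A[i][j] * x[j] for j in range(n) if j != i)
            x[i] = (b[i] - s) / A[i][i]
    return x

# Illustrative diagonally dominant system with exact solution (1, 2, 1).
A = [[4.0, 1.0, 0.0],
     [1.0, 5.0, 1.0],
     [0.0, 1.0, 3.0]]
b = [6.0, 12.0, 5.0]

x = gauss_seidel(A, b, x0=[0.0, 0.0, 0.0])
print([round(v, 4) for v in x])   # → [1.0, 2.0, 1.0]
```

Convergence is guaranteed here because the matrix is strictly diagonally dominant; starting from a guess near the solution instead of zeros would reach the same answer in fewer sweeps.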


Author(s):  
Ahmed I. Taloba, et al.

Clustering is the process of selecting k cluster centers and grouping the data around those centers. Data clustering has recently received considerable research attention, and a nature-inspired optimization algorithm called Black Hole (BH) has been suggested as a solution to data clustering problems. BH is a metaheuristic inspired by the black hole phenomenon in the universe, in which each candidate solution in the search space represents a single star. Although the original BH has shown improved performance on standard datasets, it lacks exploration capabilities and performs only a good local search. In this paper, a new hybrid metaheuristic based on the combination of the BH algorithm and a genetic algorithm is suggested. The genetic algorithm forms the first stage of the algorithm, exploring the search space and providing the initial positions for the stars; the BH algorithm then exploits the search space and refines the best solution until the termination condition is reached. The proposed hybrid approach was evaluated on nine popular benchmark functions, where the outcomes indicated improved robustness compared with BH and the other benchmark algorithms in the study. It also showed a high convergence rate on six real datasets sourced from the UCI machine learning repository, indicating good behavior of the hybrid algorithm on data clustering problems. Conclusively, the investigation showed the suitability of the suggested hybrid algorithm for solving data clustering problems.
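A minimal sketch of the Black Hole update rule (the GA seeding stage and the clustering objective are omitted): stars drift toward the best candidate (the black hole), and any star crossing the event-horizon radius is replaced by a random one. The sphere fitness function and the bounds are illustrative assumptions.

```python
import random

random.seed(0)

def fitness(x):
    """Sphere function (to be minimized); stands in for a clustering cost."""
    return sum(v * v for v in x)

def black_hole_step(stars, lo=-5.0, hi=5.0):
    stars.sort(key=fitness)
    bh, rest = stars[0], stars[1:]          # best star becomes the black hole
    radius = fitness(bh) / (sum(fitness(s) for s in stars) + 1e-12)
    new = [bh]
    for s in rest:
        # drift each star toward the black hole by a random fraction
        moved = [v + random.random() * (b - v) for v, b in zip(s, bh)]
        dist = sum((a - b) ** 2 for a, b in zip(moved, bh)) ** 0.5
        if dist < radius:                    # swallowed: respawn a random star
            moved = [random.uniform(lo, hi) for _ in bh]
        new.append(moved)
    return new

stars = [[random.uniform(-5, 5) for _ in range(2)] for _ in range(20)]
for _ in range(200):
    stars = black_hole_step(stars)
best = min(stars, key=fitness)
print(round(fitness(best), 6))   # best fitness found
```

In the paper's hybrid, the initial `stars` would come from a genetic-algorithm run rather than uniform sampling, which is what supplies the missing exploration.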


2021 ◽  
Author(s):  
Arun Kumar ◽  
Mohammad Shabi Hashmi ◽  
Abdul Quaiyum Ansari ◽  
Sultangali Arzykulov

Abstract This paper proposes a Haar wavelet algorithm with a phase-wise flow for three distinct cases of a fractional-order-calculus-based electromagnetic wave problem. Numerical solutions to these problems are presented in tabular and graphic form, using exact and approximate analysis alongside the Haar scheme for comparative analysis. A convergence study was also carried out to validate the accuracy and efficacy of the suggested scheme. The proposed scheme is well suited to the numerical solution of this type of computational problem because it is uncomplicated, easy to implement, fast, and has a high convergence rate.
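A hedged sketch of the plain Haar idea (not the paper's fractional scheme): a Haar expansion truncated at resolution level J is equivalent to projecting the function onto piecewise constants over 2^J dyadic intervals, and the error shrinks as J grows. The test function is an illustrative choice.

```python
import math

def haar_projection(f, J, x):
    """Value at x of f projected onto piecewise constants on 2^J dyadic
    cells of [0, 1): a Haar expansion truncated at level J."""
    n = 2 ** J
    k = min(int(x * n), n - 1)
    a, b = k / n, (k + 1) / n
    m = 16   # cell average via midpoint quadrature, 16 samples per cell
    return sum(f(a + (i + 0.5) * (b - a) / m) for i in range(m)) / m

f = lambda x: math.sin(2 * math.pi * x)
for J in (2, 4, 6):
    err = max(abs(f(x / 1000) - haar_projection(f, J, x / 1000))
              for x in range(1000))
    print(J, round(err, 4))   # error shrinks roughly by half per level
```

This halving of the error per level is the first-order convergence that Haar-based schemes inherit; the paper's contribution lies in coupling such a basis with fractional-order operators.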


2021 ◽  
Author(s):  
Sepideh Pajang ◽  
Nadaya Cubas ◽  
Laetitia Le Pourhiet ◽  
Eloise Bessiere ◽  
Jean Letouzey ◽  
...  

<p>Western Makran is one of the few subduction zones whose seismogenic potential remains largely unconstrained. According to the sparse GPS stations, the subduction zone is accumulating strain that will be released during future earthquakes. We first use mechanical modelling to retrieve the spatial variations of the frictional properties of the megathrust and to discuss its seismogenic potential. To do so, we build a structural map along the Iranian part of the Oman Sea and investigate three N-S seismic profiles. The profiles are characterized by a long imbricated thrust zone at the front of the wedge. A diapiric zone of shallow origin lies between the imbricated zone and the shore. Along the eastern and western shores, active listric normal faults root down to the megathrust. The eastern and western domains have developed similar deformation, with three zones of active faulting: the normal faults onshore, thrusts ahead of the mud diapirs, and the frontal thrusts. In contrast, no normal faults are identified along the central domain, where a seamount is entering the subduction zone. From mechanical modelling, we show that along the eastern and western profiles, a transition from very low to extremely low friction is required to activate the large coastal normal fault, and that an increase of friction along the imbricated zone is necessary to propagate the deformation to the front. These along-dip transitions could be related either to a transition from aseismic to seismic behavior or to the brittle-viscous transition.</p><p>To discriminate between these possibilities, we run 2-D thermo-mechanical models incorporating temperature evolution with a heat-flow boundary condition. Our simulations are first calibrated to reproduce the heat-flow estimates based on the BSR depth; we then investigate the effects of the illite-smectite and brittle-viscous transitions on the deformation. The landward decrease in heat flow is due to the landward deepening of the oceanic plate and the thickening of the sediments of the accretionary wedge. Deformation starts at the rear of the model and migrates forward, forming in-sequence, forward-verging thrust sheets. Both the brittle-viscous and illite-smectite transitions affect the topographic slope and friction, but a reduction of friction due to the illite-smectite transition reduces the slope through normal faulting that does not appear in the brittle-viscous simulations. The presence of normal faults could therefore make it possible to distinguish viscously creeping segments from segments that deform seismically. As a consequence, the coastal normal faults are most probably related to seismic asperities, and the along-strike difference in deformation would reveal the existence of two patches, one along the eastern domain and a second along the western domain. Since no large earthquake has been historically reported, and given the high convergence rate, a major earthquake will eventually strike the Makran region. We suggest that the magnitude of this event will depend on the behavior of the central region and on the ability of a rupture to propagate from the eastern to the western asperity or into the Pakistani Makran.</p>


Author(s):  
Diriba Kajela Geleta ◽  
Mukhdeep Singh Manshahia

In this chapter, the artificial bee colony (ABC) algorithm is applied to optimize a hybrid wind-solar renewable energy system. The main objective of this research is to minimize the total annual cost of the system by determining appropriate numbers of wind turbines, solar panels, and batteries, so that the desired load can be satisfied economically and reliably under the given constraints. ABC is a recently proposed metaheuristic algorithm inspired by the intelligent behavior of honey bees, such as searching for food sources and collecting and processing nectar. Instead of gradient and Hessian information, ABC uses stochastic rules to escape local optima and find globally optimal solutions. The proposed methodology was applied to this hybrid system with the help of MATLAB code, and the results are discussed. Additionally, it is shown that ABC can efficiently solve real-world optimum-sizing problems with a high convergence rate and reliability. The results were compared with those of particle swarm optimization (PSO).
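A minimal sketch of the employed-bee and scout-bee phases of ABC (the onlooker phase is omitted for brevity) on a toy cost function; the quadratic "annual cost" in turbine and panel counts is a made-up stand-in for the chapter's sizing model, not its actual formulation.

```python
import random

random.seed(1)

def cost(x):
    """Illustrative stand-in for total annual cost; minimum at (3, 10)."""
    n_wt, n_pv = x
    return (n_wt - 3.0) ** 2 + (n_pv - 10.0) ** 2 + 5.0

def neighbor(x, flock):
    """ABC move: perturb one coordinate toward/away from a random peer."""
    k = random.choice(flock)
    j = random.randrange(len(x))
    y = list(x)
    y[j] = y[j] + random.uniform(-1, 1) * (y[j] - k[j])
    return y

def abc(n_food=10, iters=100, limit=10):
    foods = [[random.uniform(0, 20), random.uniform(0, 40)]
             for _ in range(n_food)]
    trials = [0] * n_food
    best = min(foods, key=cost)
    for _ in range(iters):
        for i in range(n_food):          # employed bees: greedy local search
            cand = neighbor(foods[i], foods)
            if cost(cand) < cost(foods[i]):
                foods[i], trials[i] = cand, 0
            else:
                trials[i] += 1
        for i in range(n_food):          # scout bees: abandon stale sources
            if trials[i] > limit:
                foods[i] = [random.uniform(0, 20), random.uniform(0, 40)]
                trials[i] = 0
        best = min(foods + [best], key=cost)
    return best

best = abc()
print([round(v, 2) for v in best], round(cost(best), 2))
```

A real sizing run would replace `cost` with the annualized capital and maintenance model under load-reliability constraints, and round the continuous positions to integer component counts.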


2020 ◽  
Vol 10 (24) ◽  
pp. 9117
Author(s):  
Nada Chakhim ◽  
Mohamed Louzar ◽  
Abdellah Lamnii ◽  
Mohammed Alaoui

Diffuse optical tomography (DOT) is an emerging modality that reconstructs the optical properties of a highly scattering medium from measured boundary data. One way to solve DOT and recover the quantities of interest is the inverse-problem approach, which requires choosing an optimization algorithm for the iterative approximation of the solution; however, the well-established no-free-lunch principle holds in general. This paper compares the behavior of three gradient-descent-based optimizers on the DOT inverse problem by running randomized simulations and analyzing the generated data, in order to shed light on any significant performance differences, if they exist at all, among these optimizers in the specific context of DOT. The major practical problem when selecting an optimization algorithm for a production DOT system is to be confident that the algorithm will converge to the true solution at a high rate and with reasonable speed, and that the reconstructed image will be of high quality in terms of good localization of the inclusions and good agreement with the true image. In this work, we harnessed carefully designed randomized simulations to tackle the practical problem of choosing the right optimizer with the right parameters for practical DOT applications, and derived statistical results concerning convergence rate, speed, and quality of image reconstruction for the three optimization algorithms.
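The flavor of such a comparison can be sketched with three common gradient-descent-style optimizers (the paper does not name its three, so plain GD, heavy-ball momentum, and Adam are assumptions here) minimizing an ill-conditioned quadratic that stands in for the DOT misfit functional.

```python
import math

def grad(x):
    """Gradient of f(x, y) = 0.5 * (x^2 + 25 y^2), an ill-conditioned bowl."""
    return [x[0], 25.0 * x[1]]

def run(update, steps=200):
    x, state = [5.0, 5.0], {}
    for t in range(1, steps + 1):
        x = update(x, grad(x), state, t)
    return 0.5 * (x[0] ** 2 + 25.0 * x[1] ** 2)   # final misfit value

def gd(x, g, state, t, lr=0.03):
    return [xi - lr * gi for xi, gi in zip(x, g)]

def momentum(x, g, state, t, lr=0.03, beta=0.9):
    v = state.setdefault("v", [0.0, 0.0])
    state["v"] = [beta * vi + gi for vi, gi in zip(v, g)]
    return [xi - lr * vi for xi, vi in zip(x, state["v"])]

def adam(x, g, state, t, lr=0.1, b1=0.9, b2=0.999, eps=1e-8):
    m = state.setdefault("m", [0.0, 0.0])
    v = state.setdefault("v2", [0.0, 0.0])
    state["m"] = [b1 * mi + (1 - b1) * gi for mi, gi in zip(m, g)]
    state["v2"] = [b2 * vi + (1 - b2) * gi * gi for vi, gi in zip(v, g)]
    mh = [mi / (1 - b1 ** t) for mi in state["m"]]    # bias correction
    vh = [vi / (1 - b2 ** t) for vi in state["v2"]]
    return [xi - lr * mi / (math.sqrt(vi) + eps)
            for xi, mi, vi in zip(x, mh, vh)]

for name, upd in [("gd", gd), ("momentum", momentum), ("adam", adam)]:
    print(name, f"{run(upd):.2e}")
```

The paper's randomized-simulation design effectively repeats this kind of race over many sampled inverse problems and reports the statistics, rather than a single run.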


2020 ◽  
pp. 208-217
Author(s):  
O.M. Khimich ◽  
V.A. Sydoruk ◽  
A.N. Nesterenko ◽  
...  

Systems of nonlinear equations often arise when modeling processes of different natures. These can be independent problems describing physical processes, or problems arising at an intermediate stage of solving more complex mathematical problems. Usually these are high-order problems with a large number of unknowns, which better capture the local features of the process or object being modeled; in addition, more accurate discrete models allow more accurate solutions. The matrices of such problems usually have a sparse structure, often banded, profile, or block-diagonal with bordering, and in many cases they are symmetric and positive definite or semi-definite. Systems of nonlinear equations are solved mainly by iterative methods based on Newton's method, which has a high (quadratic) convergence rate near the solution, provided the initial approximation lies in the basin of attraction of the solution. At each iteration, the method requires computing the Jacobian matrix and then solving a system of linear algebraic equations; as a consequence, this linear solve dominates the complexity of one iteration. Using parallel computations at the linear-solve step greatly accelerates finding the solution of systems of nonlinear equations. In this paper, a new method for solving high-order systems of nonlinear equations with a block Jacobian matrix is proposed. The basis of the new method is a combination of the classical Newton algorithm with an efficient small-tile algorithm for solving systems of linear equations with sparse matrices. Times for solving systems of nonlinear equations of different orders on the nodes of the SKIT supercomputer are given.
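The classical Newton iteration the paper builds on can be sketched on a dense 2x2 example (the paper's contribution, a small-tile sparse solver for the inner linear systems, is not reproduced here); each step solves J(x) dx = -F(x) and updates x.

```python
def newton(F, J, x, iters=20):
    """Newton's method for a 2-unknown system F(x) = 0."""
    for _ in range(iters):
        f = F(x)
        (a, b), (c, d) = J(x)
        det = a * d - b * c
        # solve J dx = -F(x) by Cramer's rule; in the paper this inner
        # linear solve is where the sparse small-tile algorithm goes
        dx0 = (-f[0] * d + f[1] * b) / det
        dx1 = (-f[1] * a + f[0] * c) / det
        x = [x[0] + dx0, x[1] + dx1]
    return x

# toy system: x^2 + y^2 = 5, x*y = 2, which has (2, 1) among its roots
F = lambda x: [x[0] ** 2 + x[1] ** 2 - 5, x[0] * x[1] - 2]
J = lambda x: [[2 * x[0], 2 * x[1]], [x[1], x[0]]]

root = newton(F, J, [2.5, 0.5])
print([round(v, 6) for v in root])   # → [2.0, 1.0]
```

Starting from (2.5, 0.5), inside the basin of attraction of (2, 1), the quadratic convergence the abstract mentions is visible: the error roughly squares at each step.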

