METHOD FOR OPTIMIZING OF MOBILE ROBOT TRAJECTORY IN REPELLER SOURCES FIELD

Author(s): Mikhail Medvedev, Vladimir Kostjukov, Viacheslav Pshikhopov

The article discusses a procedure for correcting the trajectory of a robotic platform (RTP) on a plane in order to reduce the probability of its detection or defeat in the field of a finite number of repeller sources, each of which is described by a mathematical model of some factor of counteraction to the RTP. The procedure rests, on the one hand, on the concept of a characteristic probability function of a system of repeller sources, which makes it possible to assess the degree of influence of these sources on the moving RTP; from this concept follows the probability of successfully completing the trajectory, used here as the optimization criterion. On the other hand, the procedure rests on solving local optimization problems that correct individual sections of the initial trajectory, taking into account the location of specific repeller sources with specified parameters in their vicinity. Each source is characterized by its potential, frequency of impact, radius of action, and field-decay parameters. The trajectory is adjusted iteratively with respect to a target value of the probability of passing. The main restriction on varying the original trajectory is the maximum allowable deviation of the changed trajectory from the original one: without such a restriction the task loses its meaning, because one could then select an area covering all obstacles and sources and bypass it around the perimeter. We therefore search for a local extremum corresponding to a curve that is acceptable in the sense of this restriction. The iterative procedure proposed in this paper searches for the corresponding local maxima of the probability of RTP passage in the field of several arbitrarily located and oriented sources, in some neighborhood of the initial trajectory. First, the trajectory optimization problem is posed and solved for movement in the field of a single source with a scope in the form of a circular sector; the result is then extended to the case of several similar sources. The main problem of the study is the choice of the general form of the functional at each point of the initial curve, as well as of its adjustment coefficients. It is shown that selecting these coefficients is an adaptive procedure whose input variables are characteristic geometric quantities describing the current trajectory in the source field. Standard median smoothing procedures are used to eliminate oscillations that arise from the locality of the proposed procedure. Simulation results show the high efficiency of the proposed procedure for correcting a previously planned trajectory.
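A minimal sketch of the kind of iterative correction the abstract describes, under our own assumptions (an exponential field-decay model and a simple repulsive step; the sources' sector-shaped scope, frequency of impact, and the paper's exact functional are not reproduced). Endpoints stay fixed, each interior point is pushed away from nearby sources subject to the maximum-deviation restriction, and median smoothing suppresses the oscillations that the locality of the update creates:

```python
import numpy as np
from scipy.ndimage import median_filter

def passage_probability(path, sources):
    """Probability of passing the whole path: product over points and
    sources of (1 - local hit probability). The exponential decay is an
    assumption standing in for the paper's characteristic probability
    function. Each source is (center, potential, radius)."""
    p = 1.0
    for q in path:
        for center, potential, radius in sources:
            d = np.linalg.norm(q - center)
            if d < radius:
                p *= 1.0 - potential * np.exp(-d / radius)
    return p

def correct_trajectory(path0, sources, max_dev=1.0, step=0.05, iters=200):
    """Iteratively push interior points away from repeller sources while
    keeping every point within max_dev of the initial trajectory."""
    path = path0.copy()
    for _ in range(iters):
        for i in range(1, len(path) - 1):           # endpoints stay fixed
            g = np.zeros(2)
            for center, potential, radius in sources:
                d = path[i] - center
                dist = np.linalg.norm(d)
                if 1e-9 < dist < radius:            # inside the source's radius
                    g += potential * np.exp(-dist / radius) * d / dist
            candidate = path[i] + step * g
            if np.linalg.norm(candidate - path0[i]) <= max_dev:
                path[i] = candidate                 # accept only admissible moves
        # median smoothing removes local oscillations along each coordinate
        path[:, 0] = median_filter(path[:, 0], size=5, mode='nearest')
        path[:, 1] = median_filter(path[:, 1], size=5, mode='nearest')
        path[0], path[-1] = path0[0], path0[-1]     # re-pin the endpoints
    return path
```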

Electronics, 2021, Vol 10 (9), pp. 1117
Author(s): Bin Li, Zhikang Jiang, Jie Chen

Computing the sparse fast Fourier transform (sFFT) has long been a critical topic because of its high efficiency and wide practicability. To date, more than twenty different sFFT algorithms compute the discrete Fourier transform (DFT), each by its own unique method. In order to use them properly, the pressing question is how to analyze and evaluate the performance of these algorithms in theory and in practice. This paper discusses the technology and performance of sFFT algorithms that use the aliasing filter. In the first part, the paper introduces three frameworks: the one-shot framework based on a compressed sensing (CS) solver, the peeling framework based on a bipartite graph, and the iterative framework based on binary tree search. We then draw theoretical conclusions about the performance of the six corresponding algorithms: sFFT-DT1.0, sFFT-DT2.0, sFFT-DT3.0, FFAST, R-FFAST, and DSFFT. In the second part, we run two categories of experiments, computing signals of different SNRs, lengths, and sparsities on a standard testing platform, and record the run time, the percentage of the signal sampled, and the L0, L1, and L2 errors in both the exactly sparse case and the general sparse case. The results of these performance analyses guide us in optimizing these algorithms and using them selectively.
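The aliasing filter at the heart of these algorithms can be illustrated in a few lines of NumPy (our illustration, not code from any of the six implementations): subsampling a length-n signal by a factor of n/B folds its spectrum, so bucket j of the B-point DFT collects exactly the original frequencies congruent to j modulo B.

```python
import numpy as np

n, B = 1024, 16                     # signal length and number of buckets
rng = np.random.default_rng(0)
freqs = rng.choice(n, size=4, replace=False)   # a 4-sparse spectrum
spectrum = np.zeros(n, dtype=complex)
spectrum[freqs] = 1.0
x = np.fft.ifft(spectrum) * n       # time-domain signal with that spectrum

x_sub = x[:: n // B]                # aliasing filter: keep every (n/B)-th sample
buckets = np.fft.fft(x_sub) / B     # B-point DFT of the subsampled signal

expected = np.zeros(B, dtype=complex)
for f in freqs:                     # each frequency lands in bucket f mod B
    expected[f % B] += 1.0
assert np.allclose(buckets, expected)
```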


2014, Vol 2014, pp. 1-8
Author(s): Cao Taiqiang, Chen Zhangyong, Wang Jun, Sun Zhang, Luo Qian, et al.

To implement a high-efficiency bridgeless power factor correction (PFC) converter, a new topology is introduced, and its continuous conduction mode (CCM) operating principles and DC steady-state characteristics are analyzed. The analysis shows that the converter not only has a bipolar-gain characteristic but also shares the characteristic of the traditional Boost converter, while its voltage transfer ratio does not depend on the resonant branch parameters or the switching frequency. Based on this topology, a novel bridgeless Bipolar-Gain Pseudo-Boost PFC converter is proposed. With this converter, the diode rectifier bridge of the traditional AC-DC converter is eliminated, and zero-current switching of the fast recovery diode is achieved, which improves efficiency. We also propose a one-cycle control policy for this converter. Finally, experiments verify the accuracy and feasibility of the proposed converter.
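For reference, the CCM characteristic the abstract compares against (a textbook result, not the paper's derivation) is the ideal Boost transfer ratio, which likewise depends only on the duty cycle \(D\) and not on resonant-branch parameters or switching frequency:

\[
M \;=\; \frac{V_o}{V_{in}} \;=\; \frac{1}{1-D}, \qquad 0 \le D < 1 .
\]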


2021, Vol 15 (3), pp. 1-33
Author(s): Jingjing Wang, Wenjun Jiang, Kenli Li, Keqin Li

CANDECOMP/PARAFAC (CP) decomposition is widely used in various online social network (OSN) applications. However, it is inefficient when dealing with massive and incremental data. Incremental CP decomposition (ICP) methods have been proposed to improve efficiency and process evolving data by updating the decomposition results according to newly added data. The ICP methods are efficient but inaccurate, because approximation in the incremental updating causes serious error accumulation. To promote the wide use of ICP, we strive to reduce its cumulative errors while keeping its high efficiency. We first differentiate all possible errors in ICP into two types: the cumulative reconstruction error and the prediction error. Next, we formulate two optimization problems for reducing these two errors. Then, we propose several restarting strategies to address the two problems. Finally, we test their effectiveness in three typical dynamic OSN applications. To the best of our knowledge, this is the first work on reducing the cumulative errors of ICP methods in dynamic OSNs.
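As a minimal sketch of why incremental updating is fast but accumulates error (our illustration, not the paper's method or its restarting strategies): when a new slice arrives along the time mode of a tensor with CP factors A, B, C, only a new temporal row of C is solved for, while A and B are reused as-is, so each approximation step compounds the last.

```python
import numpy as np

def khatri_rao(A, B):
    """Column-wise Kronecker product of A (I x R) and B (J x R): (I*J) x R."""
    I, R = A.shape
    J = B.shape[0]
    return (A[:, None, :] * B[None, :, :]).reshape(I * J, R)

def append_time_slice(A, B, C, X_new):
    """Append one temporal row to C for an incoming I x J slice X_new,
    keeping the non-temporal factors A and B fixed (the source of both
    the speedup and the cumulative error)."""
    KR = khatri_rao(A, B)                    # (I*J) x R design matrix
    c, *_ = np.linalg.lstsq(KR, X_new.reshape(-1), rcond=None)
    return np.vstack([C, c])
```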


Proceedings, 2018, Vol 2 (22), pp. 1400
Author(s): Johannes Schmelcher, Max Kleine Büning, Kai Kreisköther, Dieter Gerling, Achim Kampker

Energy-efficient electric motors are attracting increasing attention since they are used in electric cars and reduce operational costs, for instance. Due to their high efficiency, permanent-magnet synchronous motors are used progressively more. However, the need to use rare-earth magnets for such high-efficiency motors is problematic, not only with regard to cost but also in socio-political and environmental respects. Therefore, increasing effort has to be put into finding the best possible design. The goals are, among others, to reduce the amount of rare-earth magnet material and to increase efficiency. The first part of this multipart paper presents the characteristics of optimization problems in engineering and general methods for solving them. Part two highlights different approaches to the design optimization problem of electric motors. The last part evaluates the different categories of optimization methods with respect to three criteria: degrees of freedom, computing time, and the required user experience. As will be seen, there is a conflict of objectives regarding these criteria. The requirements a new optimization method has to fulfil in order to resolve this conflict of objectives are presented in the last part.


1991, Vol 4 (2), pp. 207-241
Author(s): R H Kruse, W H Puckett, J H Richardson

The biological safety cabinet is the one piece of laboratory and pharmacy equipment that provides protection for personnel, the product, and the environment. Through the history of laboratory-acquired infections from the earliest published case to the emergence of hepatitis B and AIDS, the need for health care worker protection is described. A brief description with design, construction, function, and production capabilities is provided for class I and class III safety cabinets. The development of the high-efficiency particulate air filter provided the impetus for clean room technology, from which evolved the class II laminar flow biological safety cabinet. The clean room concept was advanced when the horizontal airflow clean bench was manufactured; it became popular in pharmacies for preparing intravenous solutions because the product was protected. However, as with infectious microorganisms and laboratory workers, individual sensitization to antibiotics and the advent of hazardous antineoplastic agents changed the thinking of pharmacists and nurses, and they began to use the class II safety cabinet to prevent adverse personnel reactions to the drugs. How the class II safety cabinet became the mainstay in laboratories and pharmacies is described, and insight is provided into the formulation of National Sanitation Foundation standard number 49 and its revisions. The working operations of a class II cabinet are described, as are the variations of the four types with regard to design, function, air velocity profiles, and the use of toxins. The main certification procedures are explained, with examples of improper or incorrect certifications. The required levels of containment for microorganisms are given. Instructions for decontaminating the class II biological safety cabinet of infectious agents are provided; unfortunately, there is no method for decontaminating the cabinet of antineoplastic agents.


2021
Author(s): Mingxuan Zhao, Yulin Han, Jian Zhou

The operational law put forward by Zhou et al. for strictly monotone functions of regular LR fuzzy numbers gives a valuable push to the development of fuzzy set theory. However, its applicability is confined to strictly monotone functions and regular LR fuzzy numbers, which restricts its practical use to a certain degree. In this paper, we propose an extensive operational law that generalizes the one proposed by Zhou et al. so that it applies to monotone (but not necessarily strictly monotone) functions of regular LR fuzzy intervals (LR-FIs), of which regular fuzzy numbers can be regarded as particular cases. By means of the extensive operational law, the inverse credibility distributions (ICDs) of monotone functions of regular LR-FIs can be calculated efficiently and effectively. Moreover, the extensive operational law has a wider range of applications and can deal with situations that are hard to handle with the original operational law. Subsequently, based on the extensive operational law, computational formulae for the expected values (EVs) of LR-FIs and of monotone functions of regular LR-FIs are presented. Furthermore, the proposed operational law is applied to fuzzy optimization problems with regular LR-FIs, for which a solution strategy is provided: the fuzzy program is first converted to a deterministic equivalent, and a newly devised solution algorithm is then utilized. Finally, the proposed solution strategy is applied to a purchasing planning problem and evaluated against a traditional fuzzy simulation-based genetic algorithm. Experimental results indicate that our method is much more efficient, yielding high-quality solutions within a short time.
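As commonly stated (our hedged paraphrase of the original result the paper generalizes, not the paper's own extension): for a function \(f(x_1,\ldots,x_n)\) strictly increasing in its first \(m\) arguments and strictly decreasing in the rest, and independent regular fuzzy variables \(\xi_i\) with inverse credibility distributions \(\Phi_i^{-1}\), the operational law of Zhou et al. gives the ICD of \(\xi = f(\xi_1,\ldots,\xi_n)\) as

\[
\Psi^{-1}(\alpha) \;=\; f\bigl(\Phi_1^{-1}(\alpha),\ldots,\Phi_m^{-1}(\alpha),\;\Phi_{m+1}^{-1}(1-\alpha),\ldots,\Phi_n^{-1}(1-\alpha)\bigr),
\]

from which the expected value follows as \(E[\xi] = \int_0^1 \Psi^{-1}(\alpha)\,d\alpha\). The paper's extension relaxes the strict monotonicity requirement and moves from regular fuzzy numbers to regular fuzzy intervals.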


Coping is an important component in adapting a person to stressful events and maintaining psychological balance. The aim of this work was to study the features of coping in patients with cerebrovascular pathology (CVP) in the dynamics of its development at different stages of the disease. During 2016-2018, 383 patients with cerebrovascular pathology at different stages of the disease were observed at the Kharkiv Regional Clinical Hospital - Emergency and Emergency Medicine Center. Coping was assessed using the Ways of Coping Questionnaire of R. Lazarus and S. Folkman. Persons at high risk of CVP, patients with clinical manifestations of CVP, and post-stroke patients generally showed greater coping tension than somatically healthy people. An imbalance between low- and high-efficiency forms of coping was observed: confrontation, distancing, and avoidance dominated over problem solving and positive reappraisal, with an increasing role of social support as an external psychosocial resource. Therefore, patients at various stages of CVP had an unstable stress-coping profile, which on the one hand formed the basis for the development of stress-related psychosomatic changes and on the other prevented them from correctly resolving existing stress. Detection and psychological correction of ineffective coping strategies in patients with CVP is an important component of psychological help for this group of patients.


Author(s): Ștefana Stăcescu, Gabriel Hancu, Denisa Podar, Ștefania Todea, Amelia Tero-Vescan

Relatively few medications are available for the management of obesity, and all are indicated as adjuncts to increased physical activity, caloric restriction, and lifestyle modification. Among the different weight-loss drugs, the most intriguing and controversial class is that of the anorexic amphetamines, due to their high efficacy but also their significant side effects. Several previously approved anorexic amphetamines, such as fenfluramine, phenylpropanolamine, phenmetrazine, and sibutramine, have been withdrawn from the market because of unanticipated adverse effects. Nowadays only four amphetamine derivatives are approved for the short-term treatment of obesity: amfepramone, benzphetamine, phendimetrazine, and phentermine. The article provides an overview of both the history and the current status of the use of amphetamine derivatives in obesity pharmacotherapy. J Pharm Care 2019; 7(3): 75-82.


2021, Vol 2 (Original research articles)
Author(s): Matúš Benko, Patrick Mehlitz

Implicit variables of a mathematical program are variables which do not need to be optimized but are used to model feasibility conditions. They frequently appear in several different problem classes of optimization theory, comprising bilevel programming, evaluated multiobjective optimization, and nonlinear optimization problems with slack variables. In order to deal with implicit variables, they are often interpreted as explicit ones. Here, we first point out that this is a light-headed approach which induces artificial locally optimal solutions. Afterwards, we derive various Mordukhovich-stationarity-type necessary optimality conditions which correspond to treating the implicit variables as explicit ones on the one hand, or using them only implicitly to model the constraints on the other. A detailed comparison of the obtained stationarity conditions as well as the associated underlying constraint qualifications is provided. Overall, we proceed in a fairly general setting relying on modern tools of variational analysis. Finally, we apply our findings to different well-known problem classes of mathematical optimization in order to visualize the obtained theory.
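A minimal example of an implicit variable (our illustration, not taken from the paper): rewriting an inequality-constrained program with slack variables,

\[
\min_{x} f(x) \ \text{s.t.}\ g(x) \le 0
\qquad\Longleftrightarrow\qquad
\min_{x,s} f(x) \ \text{s.t.}\ g(x) + s = 0,\ s \ge 0,
\]

the slack \(s\) does not need to be optimized; it only models feasibility. Here \(s\) is uniquely determined by \(x\), but in classes such as bilevel programming the implicit variable is not unique, and a pair that is locally optimal in the joint explicit formulation need not yield a locally optimal point of the original problem.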

