On the accuracy of detailed model inductance matrix estimation for air core winding

2017
Vol 66 (4)
pp. 787-799
Author(s):
Jafar Nosratian Ahour
Saeed Seyedtabaii
Gevork B. Gharehpetian

Abstract Researchers have used various methods to determine the parameters of transformer equivalent circuits in transient studies, but most of these algorithms have difficulty finding the equivalent circuit parameters of larger models. This paper presents a new method to extract the inductance matrix of a detailed model of an air core winding for transient studies using frequency-response measurement data. The proposed method determines this matrix with acceptable accuracy. Its biggest advantage is a reduction of the search space and, thus, faster problem solving. Simulations showed that the proposed method better reproduces the behaviour of a transformer winding. The simulation results of the previous and proposed methods were compared using a 20/0.4 kV, 1600 kVA transformer, and this comparison confirmed the accuracy and superiority of the proposed method.

2015
Vol 2015
pp. 1-15
Author(s):
Hao Guo
Zhongming Pan
Zhiping Huang
Jing Zhou

As wireless sensor networks (WSNs) often provide incorrect or outdated information about the events in a monitored environment, quality of information (QoI) assessment is invaluable for users to manage and use the information in particular applications. In this paper, we propose a flexible framework to dynamically assess the QoI in different WSN applications, with a focus on accuracy and timeliness. Our framework is built on the infrastructure of an information aggregation procedure under some assumptions about the network. Based on information fusion theory, two processing models are adopted to assess the accuracy of low-level measurement data and high-level decision information without the need for ground truth (GT). Meanwhile, our framework employs two respective models according to the specific category of information timeliness in different delay-sensitive applications. To quantify timeliness, we use a practical timestamp-based measurement method to determine the information acquisition time. The framework is evaluated by simulations, including accuracy assessment in two environmental monitoring scenarios and timeliness assessment in two delay-sensitive scenarios. The simulation results show that our framework is effective and flexible for quantitative assessment of the QoI in different WSN applications.


2021
Author(s):
Ekaterina Svechnikova
Nikolay Ilin
Evgeny Mareev

The use of numerical modeling for atmospheric research is complicated by the problem of verification against a limited set of measurement data. Comparison with radar measurements is widely used to assess simulation quality. The probabilistic nature of the development of convective phenomena makes verification difficult: reproducing the overall pattern of the convective event takes priority over quantitative agreement of the values at a particular point at a particular moment.

We propose a method for verifying simulation results based on comparing areas with the same reflectivity. The method is applied to verify WRF modeling of convective events over the Aragats highland massif in Armenia. Numerical simulation yields approximately the same distribution of areas of equal reflectivity as the radar-measured reflectivity. The model tends to overestimate reflectivity on average, while still providing a qualitatively correct description of the convective phenomenon.

The proposed technique can be used to verify simulation results against reflectivity data obtained by a satellite or a meteoradar. It avoids subjectivity in the interpretation of simulation results and estimates how well the "general pattern" of the convective event is reproduced.
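The area-comparison idea above can be sketched numerically. The snippet below uses synthetic fields in place of the real WRF output and Aragats radar data (the random fields, grid size, and thresholds are all illustrative assumptions): for each reflectivity threshold it compares the fraction of the domain at or above that threshold in the "model" and "radar" fields.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic stand-ins for radar-measured and simulated reflectivity fields
# (dBZ on a 64x64 grid); the real study uses WRF output and radar data.
radar = rng.gamma(shape=2.0, scale=8.0, size=(64, 64))
model = radar * 1.1 + rng.normal(0.0, 2.0, size=(64, 64))  # biased high

# For each reflectivity threshold, compare the fraction of the domain at or
# above it: similar curves mean the simulated pattern matches the observed
# one even where point-by-point values disagree.
thresholds = np.arange(0, 60, 5)
area_radar = np.array([(radar >= t).mean() for t in thresholds])
area_model = np.array([(model >= t).mean() for t in thresholds])
mean_bias = float((area_model - area_radar).mean())  # > 0: overestimation
```

Comparing exceedance-area curves rather than point values is what makes the check robust to displacement of convective cells, which is the stated motivation of the method.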


2020
Vol 25 (1)
pp. 20-42
Author(s):  
Fedorchenko I.
Oliinyk A.
Korniienko S.
Kharchenko A.
...  

The problem of combinatorial optimization is considered in relation to the choice of power supply locations in the development of urban power distribution networks. Two methods have been developed for placing power supplies and assigning consumers to them. The first method places power supplies of identical standard sizes; the second places supplies of different standard sizes. The fundamental difference between the created methods and existing ones is that the proposed methods take into account all the constraints of the problem and use specialized encodings of candidate solutions together with modified crossover and selection operators. The proposed methods effectively address low inheritance and topological infeasibility of the found solutions, significantly reducing execution time and increasing the accuracy of calculations. The developed methods also account for restrictions on the placement of new power supplies, removing a limitation that confined earlier methods to a narrow range of problems. A comparative analysis of the results obtained by placing identically sized power supplies against known methods showed that the developed method works faster. The proposed approach ensures stable convergence of the search process in an acceptable number of steps without artificially limiting the search space or relying on additional expert information about the feasibility of candidate solutions. The results allow us to propose effective methods for improving the quality of decisions on the location of power supply facilities in the design of urban electrical networks.


Author(s):  
Tommy Hult
Abbas Mohammed

Efficient use of the available licensed radio spectrum is becoming increasingly difficult as demand and usage of the radio spectrum increase. Spectrum usage is not uniform within the licensed band but concentrated in certain frequencies, while other parts of the spectrum are inefficiently utilized. In cognitive radio environments, primary users are allocated licensed frequency bands, while secondary cognitive users dynamically allocate the empty frequencies within the licensed band according to their requested QoS (Quality of Service) specifications. This dynamic decision-making is a multi-criteria optimization problem, which the authors propose to solve using a genetic algorithm. Genetic algorithms traverse the optimization search space using a multitude of parallel solutions and choose the solution with the best overall fit to the criteria. Due to this parallelism, a genetic algorithm is less likely than traditional algorithms to get trapped at a local optimum.
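The abstract does not give the authors' fitness criteria, so the sketch below is a generic genetic algorithm over binary channel assignments, with invented per-channel throughput gains and interference penalties standing in for the QoS criteria. It shows the parallelism the abstract refers to: a whole population of candidate assignments evolves at once via tournament selection, one-point crossover, and bit-flip mutation.

```python
import random

random.seed(42)

# Hypothetical channel model: per-channel throughput gain and interference
# penalty are illustrative numbers, not taken from the chapter.
GAIN = [3, 1, 4, 1, 5, 9, 2, 6]
PENALTY = [2, 7, 1, 8, 2, 8, 1, 8]

def fitness(bits):
    # Multi-criteria score: reward throughput, penalise interference.
    gain = sum(g for g, b in zip(GAIN, bits) if b)
    cost = sum(p for p, b in zip(PENALTY, bits) if b)
    return gain - 0.5 * cost

def evolve(pop_size=30, generations=60, p_mut=0.05):
    pop = [[random.randint(0, 1) for _ in GAIN] for _ in range(pop_size)]
    for _ in range(generations):
        def pick():  # tournament selection over parallel candidate solutions
            return max(random.sample(pop, 3), key=fitness)
        nxt = []
        while len(nxt) < pop_size:
            a, b = pick(), pick()
            cut = random.randrange(1, len(GAIN))        # one-point crossover
            child = [1 - g if random.random() < p_mut else g
                     for g in a[:cut] + b[cut:]]        # bit-flip mutation
            nxt.append(child)
        pop = nxt
    return max(pop, key=fitness)

best = evolve()
```

Because many assignments are evaluated in parallel each generation, a single poor local optimum cannot trap the whole population, which is the advantage the abstract claims over traditional single-trajectory algorithms.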


Author(s):  
Hicham El Hassani
Said Benkachcha
Jamal Benhra

Inspired by nature, genetic algorithms (GA) are among the most effective meta-heuristic optimization methods for conventional NP-hard problems, especially the traveling salesman problem (TSP), one of the most studied supply chain management problems. This paper proposes a new crossover operator called Jump Crossover (JMPX) for solving the TSP with a genetic algorithm for near-optimal solutions, and compares its efficiency with the solution quality of other conventional operators on the same problem, namely partially matched crossover (PMX), edge recombination crossover (ERX), and the r-opt heuristic, with computational overhead taken into account. We adopt the path representation for our chromosome, which is the most direct representation, and use a low mutation rate to isolate the search space exploration ability of each crossover. The experimental results show that in most cases JMPX remarkably improves the solution quality of the GA compared with the two classic crossover operators and the r-opt heuristic.
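JMPX itself is the paper's contribution and its mechanics are not given in the abstract, so it is not reproduced here; the PMX baseline it is compared against, however, is standard. A minimal sketch of PMX on path-represented tours (city labels and cut points invented for illustration):

```python
def pmx(parent1, parent2, cut1, cut2):
    """Partially matched crossover (PMX) on path-represented tours."""
    size = len(parent1)
    child = [None] * size
    child[cut1:cut2] = parent1[cut1:cut2]   # copy the matching section
    for i in range(cut1, cut2):
        val = parent2[i]
        if val in child[cut1:cut2]:
            continue                        # already placed by the section
        # Follow the PMX position mapping until a free slot is found.
        pos = i
        while True:
            pos = parent2.index(parent1[pos])
            if child[pos] is None:
                break
        child[pos] = val
    # Remaining cities come straight from parent2.
    for i in range(size):
        if child[i] is None:
            child[i] = parent2[i]
    return child

p1 = [1, 2, 3, 4, 5, 6, 7, 8, 9]
p2 = [9, 3, 7, 8, 2, 6, 5, 1, 4]
child = pmx(p1, p2, 3, 7)
```

The mapping step is what keeps every child a valid permutation, i.e. a legal tour, which is why path representation needs permutation-preserving crossovers like PMX, ERX, or JMPX rather than plain one-point crossover.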


2020
Vol 29 (6)
pp. 468-478
Author(s):  
Jean Connor
Lauren Hartwell
Jennifer Baird
Benjamin Cerrato
Araz Chiloyan
...  

Background Associations between the quality of nursing care and patient outcomes have been demonstrated globally. However, translation and application of this evidence to robust measurement in pediatric specialty nursing care has been limited. Objectives To test the feasibility and performance of nurse-sensitive measures in pediatric cardiovascular programs. Methods Ten nurse-sensitive measures targeting nursing workforce, care process, and patient outcomes were implemented, and measurement data were collected for 6 months across 9 children’s hospitals in the Consortium of Congenital Cardiac Care–Measurement of Nursing Practice (C4-MNP). Participating sites evaluated the feasibility of collecting data and the usability of the data. Results Variations in nursing workforce characteristics were reported across sites, including proportion of registered nurses with 0 to 2 years of experience, nursing education, and nursing certification. Clinical measurement data on weight gain in infants who have undergone cardiac surgery, unplanned transfer to the cardiac intensive care unit, and pain management highlighted opportunities for improvement in care processes. Overall, each measure received a score of 75% or greater in feasibility and usability. Conclusions Collaborative evaluation of measurement performance, feasibility, and usability provided important information for continued refinement of the measures, development of systems to support data collection, and selection of benchmarks across C4-MNP. Results supported the development of target benchmarks for C4-MNP sites to compare performance, share best practices for improving the quality of pediatric cardiovascular nursing care, and inform nurse staffing models.


Mathematics
2018
Vol 7 (1)
pp. 17
Author(s):  
Yanhong Feng
Haizhong An
Xiangyun Gao

The moth search (MS) algorithm, originally proposed to solve continuous optimization problems, is a novel bio-inspired metaheuristic. At present, little attention has been paid to using MS to solve discrete optimization problems. One of the most common and efficient ways to discretize MS is to use a transfer function, which maps a continuous search space to a discrete one. In this paper, twelve transfer functions divided into three families, S-shaped (named S1, S2, S3, and S4), V-shaped (named V1, V2, V3, and V4), and other shapes (named O1, O2, O3, and O4), are combined with MS, and twelve discrete versions of the MS algorithm are proposed for solving the set-union knapsack problem (SUKP). Three groups of fifteen SUKP instances are employed to evaluate these transfer functions. The results show that O4 is the best transfer function when combined with MS to solve SUKP. The importance of the transfer function for improving solution quality and convergence rate is demonstrated as well.
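To make the transfer-function idea concrete, the sketch below shows one common S-shaped function (the sigmoid) and one common V-shaped function (|tanh|) from the binary-metaheuristic literature. The paper's exact S1-S4, V1-V4, and O1-O4 definitions may differ, and V-shaped functions are usually paired with a bit-flip update rule rather than the direct sampling used here for brevity.

```python
import math
import random

def s_shaped(x):
    # Sigmoid, a typical S-shaped transfer: maps a continuous coordinate
    # to a probability in (0, 1), increasing with x.
    return 1.0 / (1.0 + math.exp(-x))

def v_shaped(x):
    # |tanh(x)|, a typical V-shaped transfer: symmetric about zero, so
    # large moves in either direction raise the switching probability.
    return abs(math.tanh(x))

def binarize(position, transfer, rng):
    # Map each continuous coordinate to a bit by sampling against the
    # transfer-function value.
    return [1 if rng.random() < transfer(x) else 0 for x in position]

rng = random.Random(0)
continuous = [-2.0, -0.5, 0.0, 0.5, 2.0]
binary = binarize(continuous, s_shaped, rng)
```

With such a mapping, the continuous MS update rules can be kept unchanged; only the final positions are converted to the 0/1 item-selection vectors that SUKP requires.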


2019
Vol 62 (7)
pp. 2613-2651
Author(s):  
Grigorios Loukides
George Theodorakopoulos

Abstract A location histogram comprises the number of times a user has visited each location as they move in an area of interest, and it is often obtained from the user in the context of applications such as recommendation and advertising. However, a location histogram that leaves a user's computer or device may threaten privacy when it contains visits to locations that the user does not want to disclose (sensitive locations), or when it can be used to profile the user in a way that leads to price discrimination and unsolicited advertising (e.g., as "wealthy" or "minority member"). Our work introduces two privacy notions to protect a location histogram from these threats: Sensitive Location Hiding, which aims at concealing all visits to sensitive locations, and Target Avoidance/Resemblance, which aims at concealing the similarity/dissimilarity of the user's histogram to a target histogram that corresponds to an undesired/desired profile. We formulate an optimization problem around each notion: Sensitive Location Hiding (SLH), which seeks to construct a histogram that is as similar as possible to the user's histogram but associates all visits with nonsensitive locations, and Target Avoidance/Resemblance (TA/TR), which seeks to construct a histogram that is as dissimilar/similar as possible to a given target histogram but remains useful for getting a good response from the application that analyzes the histogram. We develop an optimal algorithm for each notion, which operates on a notion-specific search space graph and finds a shortest or longest path in the graph that corresponds to a solution histogram. In addition, we develop a greedy heuristic for the TA/TR problem, which operates directly on a user's histogram. Our experiments demonstrate that all algorithms are effective at preserving the distribution of locations in a histogram and the quality of location recommendation. They also demonstrate that the heuristic produces near-optimal solutions while being orders of magnitude faster than the optimal algorithm for TA/TR.
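The abstract does not specify the greedy heuristic, so the sketch below is only an illustration of the general idea of a greedy operating directly on a histogram: for Target Resemblance, repeatedly move one visit between locations, each time choosing the move that most reduces the L1 distance to the target. The histograms, location names, and distance measure are all invented for the example.

```python
def greedy_resemble(hist, target, moves):
    # Illustrative greedy for Target Resemblance (not the authors' exact
    # heuristic): repeatedly move one visit between locations, choosing the
    # move that most reduces the L1 distance to the target histogram.
    hist = dict(hist)

    def l1(h):
        keys = set(h) | set(target)
        return sum(abs(h.get(k, 0) - target.get(k, 0)) for k in keys)

    for _ in range(moves):
        best = None
        for src in [k for k, v in hist.items() if v > 0]:
            for dst in target:
                if dst == src:
                    continue
                trial = dict(hist)
                trial[src] -= 1
                trial[dst] = trial.get(dst, 0) + 1
                if best is None or l1(trial) < best[0]:
                    best = (l1(trial), trial)
        if best is None or best[0] >= l1(hist):
            break  # no single move improves resemblance
        hist = best[1]
    return hist

user = {'home': 5, 'work': 3}
profile = {'home': 4, 'park': 4}
resembled = greedy_resemble(user, profile, moves=10)
```

Note that moving visits (rather than adding or deleting them) keeps the total visit count unchanged, so the released histogram remains plausible to the application analyzing it.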

