Hybrid algorithms for geospatial analysis of dam location points in protective tasks for protected areas

Author(s):  
S. Rodriguez Vasquez ◽  
N. V. Mokrova

Objective. In recent decades, criteria for identifying potential areas have evolved hand in hand with technological tools such as geographic information systems (GIS). However, criteria for the preservation of protected areas are often not taken into account, causing damage to environmental biodiversity that can become irreparable. This paper presents a method for optimizing the location of key terrain points by developing a hybrid algorithm for geospatial analysis in QGIS. The goal is to reduce computation time, a critical variable for the entire key-point detection process, and to suggest potential areas that do not pose a threat to biodiversity. Methods. The strategy rests on two fundamental operations: identifying the vertices of spatial objects (rivers) and analyzing the distances between spatial objects (rivers and adjacent territories). Vertex extraction yields candidate points, while the distance analysis selects among them those points that lie within a range acceptable for locating a dam, so that the least possible damage is caused to the biodiversity of the adjacent territory. The algorithm was validated on the hydrological network of Manicaragua, Cuba. Results. The results were compared in terms of calculation time, the number of valid vertices extracted, and the percentage reduction in the total number of areas. The comparison was made using one, two and three vector layers (.shp) containing spatial objects that represent strategic protected areas. Conclusion. The results show that the more representative the spatial data (.shp) used, the more effective the algorithm is with respect to environmental protection tasks. A reduction of up to 13% of the originally detected key points was achieved.
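The two-stage strategy described above (vertex extraction, then distance filtering against protected areas) can be sketched in plain Python. This is an illustrative sketch only, not the authors' QGIS implementation; the coordinates, the distance threshold, and all helper names are assumptions made for the example.

```python
import math

def extract_vertices(polylines):
    """Collect every vertex of each river polyline as a candidate point."""
    return [pt for line in polylines for pt in line]

def min_distance(point, area_points):
    """Smallest Euclidean distance from a point to any sampled point of a protected area."""
    return min(math.dist(point, p) for p in area_points)

def filter_candidates(candidates, protected_areas, threshold):
    """Keep only candidates farther than `threshold` from every protected area."""
    return [c for c in candidates
            if all(min_distance(c, area) > threshold for area in protected_areas)]

# Illustrative data: one river polyline and one sampled protected-area boundary.
river = [(0.0, 0.0), (1.0, 2.0), (3.0, 2.5), (5.0, 4.0)]
protected = [[(3.2, 2.4), (3.5, 2.8)]]

candidates = extract_vertices([river])
valid = filter_candidates(candidates, protected, threshold=1.0)
```

Here the vertex at (3.0, 2.5) is discarded because it lies within the threshold distance of the protected area, mirroring how the algorithm reduces the set of key points as more protective layers are supplied.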

Jurnal INKOM ◽  
2014 ◽  
Vol 8 (1) ◽  
pp. 29 ◽  
Author(s):  
Arnida Lailatul Latifah ◽  
Adi Nurhadiyatna

This paper proposes parallel algorithms for precipitation interpolation in flood modelling, in particular for estimating the spatial distribution of rainfall. As an important input to flood modelling, the spatial distribution of rainfall is always needed as a precondition of the model. Two interpolation methods are discussed: inverse distance weighting (IDW) and ordinary kriging (OK). Both are developed as parallel algorithms in order to reduce computational time. To measure computational efficiency, the performance of the parallel algorithms is compared to that of the serial algorithms for both methods. Findings indicate that: (1) the computation time of the OK algorithm is up to 23% longer than that of IDW; (2) the computation time of the OK and IDW algorithms increases linearly with the number of cells/points; (3) the computation time of the parallel algorithms for both methods decays exponentially with the number of processors. The parallel IDW algorithm gives a decay factor of 0.52, while OK gives 0.53; (4) the parallel algorithms achieve near-ideal speed-up.
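For reference, IDW itself is simple to state: each interpolated value is a weighted average of the gauge values, with weights proportional to 1/d^p. Below is a minimal serial sketch in pure Python; the gauge data and the power parameter are illustrative assumptions, and the paper's parallel versions would additionally distribute the interpolation points across processors.

```python
def idw(x, y, samples, power=2.0):
    """Inverse distance weighting: weighted average of sample values,
    with weights proportional to 1 / distance**power."""
    num = den = 0.0
    for (sx, sy, value) in samples:
        d2 = (x - sx) ** 2 + (y - sy) ** 2
        if d2 == 0.0:
            return value          # exactly on a gauge: return its value
        w = d2 ** (-power / 2.0)  # 1 / d**power
        num += w * value
        den += w
    return num / den

# Two illustrative rain gauges: (x, y, rainfall).
gauges = [(0.0, 0.0, 10.0), (10.0, 0.0, 30.0)]
print(idw(5.0, 0.0, gauges))      # midpoint, equal weights → 20.0
```

Since each grid cell is interpolated independently, the loop over cells is embarrassingly parallel, which is consistent with the near-ideal speed-up the paper reports.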


2021 ◽  
Vol 235 ◽  
pp. 02031
Author(s):  
Siyue Liu

This paper explores the difficulties of building a service-oriented government, taking the 2019 public service satisfaction evaluation results of Guizhou province as an example. It finds that building a service-oriented government is a process of improving the quality of public service in an all-round way. With the steady improvement of public service quality in China, the public's expectations have risen, shifting from the original "yes or no" to the current "good or not". To speed up the construction of a service-oriented government, government departments should pay attention to changes in public demand and take the comfort, richness and transparency of public service as the key points of quality improvement.


2011 ◽  
Vol 11 (04) ◽  
pp. 571-587 ◽  
Author(s):  
WILLIAM ROBSON SCHWARTZ ◽  
HELIO PEDRINI

Fractal image compression is one of the most promising techniques for image compression due to advantages such as resolution independence and fast decompression. It exploits the fact that natural scenes exhibit self-similarity to remove redundancy, obtaining high compression rates with less quality degradation than traditional compression methods. The main drawback of fractal compression is its computationally intensive encoding process, caused by the need to search for regions of high similarity in the image. Several approaches have been developed to reduce the computational cost of locating similar regions. In this work, we propose a method based on robust feature descriptors to speed up the encoding time. Robust features provide more discriminative and representative information for regions of the image. When regions are better represented, the search for similar parts of the image can be restricted to the most likely matching candidates, which leads to a reduction in computational time. Our experimental results show that the use of robust feature descriptors reduces the encoding time while keeping high compression rates and reconstruction quality.
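The general idea of pruning the domain search with cheap features can be illustrated with a toy sketch. Note that the paper uses richer robust descriptors; the mean-based descriptor, the 1D "blocks", and the tolerance below are deliberately simplified assumptions, chosen only to show how filtering candidates cuts the number of full similarity comparisons.

```python
def descriptor(block):
    """Cheap feature: (mean, variance) of a block of pixel values."""
    m = sum(block) / len(block)
    v = sum((p - m) ** 2 for p in block) / len(block)
    return (m, v)

def mse(a, b):
    """Full similarity measure between two equal-size blocks."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) / len(a)

def best_match(range_block, domains, feature_tol=None):
    """Find the most similar domain block; optionally prune candidates
    whose descriptor mean differs too much from the range block's."""
    rdesc = descriptor(range_block)
    comparisons = 0
    best = None
    for dom in domains:
        if feature_tol is not None:
            if abs(descriptor(dom)[0] - rdesc[0]) > feature_tol:
                continue          # pruned: unlikely match, skip full MSE
        comparisons += 1
        err = mse(range_block, dom)
        if best is None or err < best[1]:
            best = (dom, err)
    return best, comparisons

domains = [[0, 0, 0, 0], [10, 10, 10, 10], [10, 11, 9, 10]]
best_full, n_full = best_match([10, 10, 9, 11], domains)
best_pruned, n_pruned = best_match([10, 10, 9, 11], domains, feature_tol=2.0)
```

With the tolerance enabled, the all-zero domain is rejected from its descriptor alone, so fewer full comparisons are performed while the best match is unchanged.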


Author(s):  
Kari Lancaster ◽  
Tim Rhodes ◽  
Marsha Rosengarten

Background: In public health emergencies, evidence, intervention, decisions and translation proceed simultaneously, in greatly compressed timeframes, with knowledge and advice constantly in flux. Idealised approaches to evidence-based policy and practice are ill equipped to deal with the uncertainties arising in evolving situations of need. Key points for discussion: There is much to learn from rapid assessment and outbreak science approaches. These emphasise methodological pluralism, adaptive knowledge generation, intervention pragmatism, and an understanding of health and intervention as situated in their practices of implementation. The unprecedented challenges of novel viral outbreaks like COVID-19 do not simply require us to speed up existing evidence-based approaches, but necessitate new ways of thinking about how a more emergent and adaptive evidence-making might be done. The COVID-19 pandemic requires us to appraise critically what constitutes ‘evidence-enough’ for iterative rapid decisions in-the-now. There are important lessons for how evidence and intervention co-emerge in social practices, and for how evidence-making and intervening proceeds through dialogue incorporating multiple forms of evidence and expertise. Conclusions and implications: Rather than treating adaptive evidence-making and decision making as a break from the routine, we argue that this should be a defining feature of an ‘evidence-making intervention’ approach to health.


Author(s):  
Leonhard Gruber ◽  
Alexander Loizides ◽  
Siegfried Peer ◽  
Lisa Maria Walchhofer ◽  
Verena Spiss ◽  
...  

Background Peripheral nerve pathologies of the upper extremity are increasingly assessed by high-resolution ultrasonography (HRUS), yet rapid identification of nerve segments can be difficult due to small nerve diameters and complex regional anatomy. We propose a landmark-based approach to speed up and facilitate evaluation and intervention in this region. Method Relevant landmarks and section planes for eleven nerve segments of the forearm, wrist and hand were defined by ultrasonography in cadaver arms before cryosection and topographical neurovascular preparation. Information on all nerve segments and a pictorial guide including anatomical cross-sections, topographical preparations and HRUS images are provided. The identification rates of these nerve segments were then assessed in 20 healthy volunteers. Results and Conclusion Sonographic landmarks and guidelines for the rapid identification and assessment of nerves of the forearm, wrist and hand are presented in pictorial and tabular form, including discussion of normal variants. Utilizing this overview should facilitate training, diagnostic examinations and intervention for nerves of the upper extremity.


2020 ◽  
Vol 2020 ◽  
pp. 1-24
Author(s):  
Yasemin Kocaoglu ◽  
Emre Cakmak ◽  
Batuhan Kocaoglu ◽  
Alev Taskin Gumus

Managing the distribution of goods is a vital operation for many companies. A successful distribution system requires an effective distribution strategy selection and optimum route planning, at the right time and at minimum cost. Furthermore, a customer's demand and location can vary from order to order. In this situation, a mixed delivery system is a good solution, since it allows different strategies to be used together to decrease delivery costs. Although distribution strategy selection is a critical issue for companies, only a few studies focus on the mixed delivery network problem, and an efficient solution is needed to guide researchers and practitioners. This paper develops a new "modified" savings-based genetic algorithm, named the "distribution strategy selection and vehicle routing hybrid algorithm (DSSVRHA)." Our new algorithm contributes to the literature a hybrid solution that efficiently solves a mixed delivery network problem covering three delivery modes: "direct shipment," "milk run," and "cross-docking." It decides the appropriate distribution strategy as well as optimal routes, using a heterogeneous fleet of vehicles at minimum cost. The results of the hybrid algorithm are compared with those of the optimization model, and its performance is validated with statistical analysis. The computational results reveal that the developed algorithm provides a good solution for reducing supply chain distribution costs and computational time.
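The "savings" ingredient of savings-based heuristics such as this one is the classical Clarke-Wright quantity s(i, j) = d(0, i) + d(0, j) - d(i, j): the cost saved by serving customers i and j on one route instead of two separate round trips from the depot. The sketch below computes and ranks these savings; it is not the DSSVRHA itself (the genetic and strategy-selection parts are omitted), and the distance matrix is illustrative.

```python
def savings(dist, depot=0):
    """Clarke-Wright savings s(i, j) = d(depot, i) + d(depot, j) - d(i, j),
    sorted in descending order; large savings favour merging i and j
    into a single route."""
    n = len(dist)
    pairs = []
    for i in range(1, n):
        for j in range(i + 1, n):
            s = dist[depot][i] + dist[depot][j] - dist[i][j]
            pairs.append((s, i, j))
    return sorted(pairs, reverse=True)

# Symmetric distance matrix: node 0 is the depot, nodes 1-3 are customers.
d = [[0, 4, 5, 6],
     [4, 0, 2, 7],
     [5, 2, 0, 3],
     [6, 7, 3, 0]]
ranked = savings(d)
```

A savings-based genetic algorithm would typically use such a ranking to seed or repair route chromosomes, merging the highest-savings pairs first.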


SPE Journal ◽  
2014 ◽  
Vol 20 (02) ◽  
pp. 294-305 ◽  
Author(s):  
S. E. Gorucu ◽  
R. T. Johns

Summary Phase-equilibrium calculations become computationally intensive in compositional simulation as the number of components and phases increases. Reduced methods were developed to address this problem, where the binary-interaction-parameter (BIP) matrix is approximated either by spectral decomposition (SD), as performed by Hendriks and van Bergen (1992), or with the two-parameter BIP formula of Li and Johns (2006). Several authors have recently stated that the SD method, and by extension all reduced methods, is not as fast as previously reported in the literature. In this paper we present the first study that compares all eight reduced and conventional methods published to date using optimized code and compilers. The results show that the SD method and its variants are not as fast as other reduced methods, and can be slower than the conventional approach when fewer than 10 components are used. These conclusions confirm the findings of recently published papers. The reason for the slow speed is the requirement that the code must allow for a variable number of eigenvalues. We show that the reduced method of Li and Johns (2006) and its variants, however, are faster because the number of reduced parameters is fixed at six, independent of the number of components. Speed-up in flash calculations with their formula is achieved for all fluids studied when more than six components are used. For example, for 10-component fluids, a speed-up of 2–3 times in the computational time for Newton-Raphson (NR) iterations is obtained compared with the conventional method based on minimization of Gibbs energy. The reduced method based on the linearized approach of Nichita and Graciaa (2011), which uses the two-parameter BIP formula of Li and Johns (2006), is also demonstrated to have a significantly larger radius of convergence than other reduced and conventional methods for the five fluids studied.


Open Physics ◽  
2016 ◽  
Vol 14 (1) ◽  
pp. 588-601 ◽  
Author(s):  
Yi Wang ◽  
Bo Yu ◽  
Shuyu Sun

Abstract Fast prediction modeling via the proper orthogonal decomposition (POD) method combined with Galerkin projection is applied to incompressible single-phase fluid flow in porous media. Cases with different configurations of porous media, boundary conditions and problem scales are designed to examine the fidelity and robustness of the model. High precision (relative deviation 1.0 × 10−4% ~ 2.3 × 10−1%) and large acceleration (speed-up 880 ~ 98454 times) of the POD model are found in these cases. Moreover, the computational time of the POD model is quite insensitive to the complexity of the problem. These results indicate that the POD model is especially suitable for large-scale complex problems in engineering.
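The core of POD is extracting dominant modes from solution snapshots; the reduced model then Galerkin-projects the governing equations onto those few modes, which is where the large speed-up comes from. Below is a minimal snapshot-POD sketch in pure Python, computing only the dominant mode via power iteration on the snapshot correlation matrix; it is an illustration under simplifying assumptions, not the paper's solver, and the Galerkin projection step is omitted.

```python
def matvec(A, x):
    return [sum(a * xi for a, xi in zip(row, x)) for row in A]

def norm(x):
    return sum(v * v for v in x) ** 0.5

def dominant_mode(snapshots, iters=200):
    """Snapshot POD: build the small correlation matrix C[i][j] =
    <snapshot_i, snapshot_j>, find its dominant eigenvector by power
    iteration, and map it back to a normalised spatial mode."""
    m = len(snapshots)                 # number of snapshots
    n = len(snapshots[0])              # spatial dimension
    C = [[sum(snapshots[i][k] * snapshots[j][k] for k in range(n))
          for j in range(m)] for i in range(m)]
    v = [1.0] * m
    for _ in range(iters):             # power iteration on C
        v = matvec(C, v)
        nv = norm(v)
        v = [vi / nv for vi in v]
    # POD mode = linear combination of snapshots with weights v
    mode = [sum(v[i] * snapshots[i][k] for i in range(m)) for k in range(n)]
    nm = norm(mode)
    return [x / nm for x in mode]

# Two snapshots that are scalar multiples of the same field:
# the single dominant mode captures them exactly.
mode = dominant_mode([[1.0, 2.0, 2.0], [2.0, 4.0, 4.0]])
```

Because the correlation matrix is only m × m (number of snapshots), this step stays cheap even when the spatial dimension n is very large, which is why the snapshot method is standard for large-scale flows.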


2012 ◽  
Vol 2012 ◽  
pp. 1-9 ◽  
Author(s):  
Shi-Liang Wu ◽  
Cui-Xia Li

The finite difference discretization of Helmholtz equations usually leads to large sparse linear systems. Since the coefficient matrix is frequently indefinite, these systems are difficult to solve iteratively. In this paper, a modified symmetric successive overrelaxation (MSSOR) preconditioning strategy is constructed from the coefficient matrix and employed to speed up the convergence rate of iterative methods. The idea is to increase the values of the diagonal elements of the coefficient matrix to obtain better preconditioners for the original linear systems. Compared with the SSOR preconditioner, the MSSOR preconditioner incurs no additional computational cost while improving the convergence rate of iterative methods. Numerical results demonstrate that this method significantly reduces both the number of iterations and the computational time, with low cost for construction and implementation of the preconditioners.
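The construction can be pictured with the standard SSOR product form M = (D̂ + L) D̂⁻¹ (D̂ + U), where L and U are the strictly lower and upper parts of A and D̂ is the diagonal, here enlarged by a shift in the spirit of the modification described above. The sketch below (with ω = 1 and a uniform additive shift) is an illustrative assumption; the paper's exact MSSOR modification of the diagonal may differ.

```python
def ssor_preconditioner(A, shift=0.0):
    """Build M = (Dh + L) Dh^{-1} (Dh + U), where Dh = D + shift*I.
    A positive shift enlarges the diagonal, the core idea behind the
    modified SSOR (MSSOR) preconditioner (omega = 1 for simplicity)."""
    n = len(A)
    Dh = [A[i][i] + shift for i in range(n)]
    # Lower and upper triangular factors carrying the modified diagonal.
    DL = [[(Dh[i] if i == j else (A[i][j] if j < i else 0.0))
           for j in range(n)] for i in range(n)]
    DU = [[(Dh[i] if i == j else (A[i][j] if j > i else 0.0))
           for j in range(n)] for i in range(n)]
    # M = DL @ diag(1/Dh) @ DU, formed explicitly for this small example.
    return [[sum(DL[i][k] * DU[k][j] / Dh[k] for k in range(n))
             for j in range(n)] for i in range(n)]

# Small symmetric tridiagonal example (in practice M is never formed
# explicitly; triangular solves with DL and DU are used instead).
A = [[4.0, 1.0, 0.0], [1.0, 4.0, 1.0], [0.0, 1.0, 4.0]]
M = ssor_preconditioner(A, shift=1.0)
```

With shift = 0 this reduces to the plain SSOR preconditioner M = D + L + U + L D⁻¹ U, so the shifted version costs nothing extra to build, consistent with the claim in the abstract.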


2019 ◽  
Vol 8 (3) ◽  
pp. 7274-7279

Image mosaicing is a method in which two or more pictures of the same scene are combined into a single larger, high-resolution panorama. It is helpful for constructing a bigger picture from numerous overlapping pictures of the same scene, each mosaic being built up as the union of pairs of images. Image mosaicing is significant in computer vision, medical imaging, satellite data, and automatic target recognition for the army. Picture stitching can be performed on a wide-angle video taken from left to right to develop a wide-scale, high-resolution panorama. This paper contains material intended to support informed choices in vision-based applications and primarily to establish a benchmark for researchers, regardless of their specific fields. It has been observed that distinct algorithms perform differently in terms of time complexity and image quality. We have examined a variety of feature detector-descriptor combinations, such as SIFT-SIFT, SURF-SURF, STAR-BRIEF and ORB-ORB, for building panoramic images from video files. We have noted that SIFT provides excellent outcomes, identifying the largest number of key points in an image at the cost of computational time, whereas SURF and ORB obtain fewer key points; ORB is the simplest of the above algorithms but does not produce good-quality image outcomes. A good compromise can be achieved with SURF. Depending on the application, the metric for image feature extraction would change. In addition, the speed of each algorithm is recorded. This systematic analysis highlights many characteristics of image stitching.
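A key step shared by all these stitching pipelines is matching descriptors between overlapping frames. For binary descriptors such as ORB's, matching uses Hamming distance, which partly explains ORB's speed. The sketch below uses toy few-bit descriptors packed as integers (real ORB descriptors are 256-bit) and a brute-force nearest-neighbour matcher; the threshold and data are illustrative assumptions.

```python
def hamming(a, b):
    """Number of differing bits between two descriptors packed as ints."""
    return bin(a ^ b).count("1")

def match(desc1, desc2, max_dist=16):
    """Brute-force nearest-neighbour matching on Hamming distance, as
    commonly used with ORB's binary descriptors; returns (i, j) index
    pairs for matches within `max_dist` bits."""
    matches = []
    for i, d1 in enumerate(desc1):
        best_j, best_d = None, max_dist + 1
        for j, d2 in enumerate(desc2):
            d = hamming(d1, d2)
            if d < best_d:
                best_j, best_d = j, d
        if best_j is not None and best_d <= max_dist:
            matches.append((i, best_j))
    return matches

# Toy 4-bit descriptors from two overlapping frames.
matches = match([0b1111, 0b0000], [0b0001, 0b1110])
```

The matched pairs would then feed homography estimation (typically with RANSAC) to align and blend the frames; floating-point descriptors such as SIFT's instead use Euclidean distance here, which is more expensive per comparison.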

