Precise and parallel segmentation model (PPSM) via MCET using hybrid distributions

2020 ◽  
Vol ahead-of-print (ahead-of-print) ◽  
Author(s):  
Soha Rawas ◽  
Ali El-Zaart

Purpose Image segmentation is one of the most essential tasks in image processing applications. It is a valuable tool in many applications such as health-care systems, pattern recognition, traffic control and surveillance systems. However, accurate segmentation is a critical task, since finding a single model that fits different types of image processing applications is a persistent problem. This paper develops a novel segmentation model that aims to serve as a unified model for any kind of image processing application. Design/methodology/approach The proposed precise and parallel segmentation model (PPSM) combines three benchmark distribution thresholding techniques (Gaussian, lognormal and gamma distributions) to estimate an optimum threshold value that leads to optimum extraction of the segmented region. Moreover, a parallel boosting algorithm is proposed to improve the performance of the developed segmentation algorithm and minimize its computational cost. To evaluate the effectiveness of the proposed PPSM, different benchmark data sets for image segmentation are used, such as Planet Hunters 2 (PH2), the International Skin Imaging Collaboration (ISIC), Microsoft Research in Cambridge (MSRC), the Berkeley Segmentation Data Set (BSDS) and Common Objects in COntext (COCO). Findings The obtained results indicate the efficacy of the proposed model in achieving high accuracy with a significant reduction in processing time compared to other segmentation models, across different types and fields of benchmark data sets. On the basis of the achieved results, the proposed PPSM–minimum cross-entropy thresholding (PPSM–MCET)-based segmentation model is a robust, accurate and highly consistent method with high performance. Originality/value A novel hybrid segmentation model is constructed by exploiting a combination of Gaussian, gamma and lognormal distributions using MCET. Moreover, to provide accurate, high-performance thresholding with minimum computational cost, the proposed PPSM uses parallel processing to minimize the computational effort of MCET computing. The proposed model might be used as a valuable tool in many applications such as health-care systems, pattern recognition, traffic control and surveillance systems.
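As a concrete illustration of the criterion this model builds on, the sketch below implements plain minimum cross-entropy thresholding (Li's criterion) over a grayscale histogram. It is a minimal single-distribution version, assuming each class is summarized by its mean only; the paper's hybrid Gaussian/lognormal/gamma estimation and the parallel boosting step are not reproduced here.

```python
import numpy as np

def mcet_threshold(hist):
    """Minimum cross-entropy threshold (Li's criterion) for a grayscale
    histogram. Single-distribution sketch: each class is summarized by
    its mean only, with no hybrid Gaussian/lognormal/gamma fitting."""
    levels = np.arange(1, len(hist) + 1, dtype=float)  # 1-based to avoid log(0)
    w = hist * levels                                  # level-weighted counts
    best_t, best_cost = 1, np.inf
    for t in range(1, len(hist)):
        n_lo, n_hi = hist[:t].sum(), hist[t:].sum()
        if n_lo == 0 or n_hi == 0:
            continue
        mu1, mu2 = w[:t].sum() / n_lo, w[t:].sum() / n_hi  # class means
        # Li's criterion with its threshold-independent term dropped
        cost = -(w[:t].sum() * np.log(mu1) + w[t:].sum() * np.log(mu2))
        if cost < best_cost:
            best_cost, best_t = cost, t
    return best_t
```

For a clearly bimodal histogram, the returned threshold falls between the two modes; a parallel version would simply partition the candidate-threshold loop across workers.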

2021 ◽  
Vol ahead-of-print (ahead-of-print) ◽  
Author(s):  
Kamel Ettaieb ◽  
Sylvain Lavernhe ◽  
Christophe Tournier

Purpose This paper aims to propose an analytical three-dimensional thermal model that allows an efficient evaluation of the thermal effect of the laser-scanning path. During manufacturing by laser powder bed fusion (LPBF), the laser-scanning path influences the thermo-mechanical behavior of parts. It is therefore necessary to validate path generation against the thermal behavior induced by this process to improve the quality of parts. Design/methodology/approach The proposed model, based on the effect of successive thermal flashes along the scanning path, is calibrated and validated by comparison with thermal results obtained from FEM software and experimental measurements. A numerical investigation is performed to compare different scanning path strategies on the Ti6Al4V material with different simulation parameters. Findings The simulation results confirm the effectiveness of the approach for simulating the thermal field to validate the scanning strategy. They also suggest that the scale of simulation can be increased thanks to high-performance computing resources. Originality/value The flash-based approach is designed to ensure the quality of the simulated thermal field while minimizing the computational cost.
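The flash-based idea can be sketched with the classical analytical solution for an instantaneous point heat source in a semi-infinite solid: temperature contributions from successive flashes deposited along the scan path are superposed at the observation point. The function below is a minimal sketch under that textbook assumption; the material values used in testing are rough Ti6Al4V-like numbers, not the paper's calibrated model.

```python
import numpy as np

def flash_temperature(point, sources, times, t_obs, Q, rho, c, alpha):
    """Temperature rise at `point` and time `t_obs`, superposing
    instantaneous point-source flashes of energy Q deposited on the
    surface of a semi-infinite solid at `sources` / `times`."""
    x, y, z = point
    T = 0.0
    for (sx, sy, sz), ts in zip(sources, times):
        dt = t_obs - ts
        if dt <= 0:            # flash not yet fired at observation time
            continue
        r2 = (x - sx) ** 2 + (y - sy) ** 2 + (z - sz) ** 2
        # factor 2: all heat diffuses into the half-space below the surface
        T += 2.0 * Q / (rho * c * (4.0 * np.pi * alpha * dt) ** 1.5) \
             * np.exp(-r2 / (4.0 * alpha * dt))
    return T
```

Summing one such term per flash along the path is what makes the evaluation cheap compared to a full FEM solve, and the per-flash terms are independent, which is what makes large-scale parallel evaluation attractive.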


2017 ◽  
Vol 10 (13) ◽  
pp. 180
Author(s):  
Maheswari R ◽  
Pattabiraman V ◽  
Sharmila P

Objective: The prospective need of SIMD (single instruction, multiple data) applications such as video and image processing in a single system requires greater flexibility in computation to deliver high-quality real-time data. This paper analyzes an FPGA (field-programmable gate array)-based high-performance Reconfigurable OpenRISC1200 (ROR) soft-core processor for SIMD. Methods: The ROR1200 ensures performance improvement through data-level parallelism, executing SIMD instructions simultaneously in HPRC (high-performance reconfigurable computing) at reduced resource utilization through an RRF (reconfigurable register file) with multiple core functionalities. This work analyzes the functionality of the reconfigurable architecture by illustrating the implementation of two different image processing operations: image convolution and image quality improvement. The MAC (multiply-accumulate) unit of the ROR1200 is used to perform image convolution, and the execution unit with HPRC is used for image quality improvement. Result: With parallel execution in multiple cores, the proposed processor improves image quality by doubling the frame rate up to 60 fps (frames per second) with a peak power consumption of 400 mW. The processor achieves a computation time of 12 ms at a refresh rate of 60 Hz, with a MAC critical path delay of 1.29 ns. Conclusion: This FPGA-based processor is a feasible solution for portable embedded SIMD-based applications that need high performance at reduced power consumption.
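Per output pixel, the convolution workload offloaded to a MAC unit reduces to a chain of multiply-accumulate operations. The sketch below makes that structure explicit as functionally equivalent scalar code; it illustrates the arithmetic only, not the ROR1200 datapath.

```python
import numpy as np

def convolve2d(img, kernel):
    """Direct 2-D filtering written as explicit multiply-accumulate (MAC)
    steps, the per-pixel workload a hardware MAC unit executes. The kernel
    is applied unflipped (correlation), which equals convolution for the
    symmetric kernels typical of image filtering."""
    kh, kw = kernel.shape
    ph, pw = kh // 2, kw // 2
    padded = np.pad(img, ((ph, ph), (pw, pw)), mode="edge")
    out = np.zeros(img.shape, dtype=float)
    for y in range(img.shape[0]):
        for x in range(img.shape[1]):
            acc = 0.0                                   # accumulator register
            for i in range(kh):
                for j in range(kw):
                    acc += padded[y + i, x + j] * kernel[i, j]  # one MAC
            out[y, x] = acc
    return out
```

Each output pixel is independent of the others, which is exactly the data-level parallelism a SIMD datapath exploits by computing several accumulators at once.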


Circuit World ◽  
2020 ◽  
Vol 46 (4) ◽  
pp. 285-299
Author(s):  
Hamidreza Uoosefian ◽  
Keivan Navi ◽  
Reza Faghih Mirzaee ◽  
Mahdi Hosseinzadeh

Purpose The high demand for fast, energy-efficient, compact computational blocks in digital electronics has led the researchers to use approximate computing in applications where inaccuracy of outputs is tolerable. The purpose of this paper is to present two ultra-high-speed current-mode approximate full adders (FA) by using carbon nanotube field-effect transistors. Design/methodology/approach Instead of using threshold detectors, which are common elements in current-mode logic, diodes are used to stabilize voltage. Zener diodes and ultra-low-power diodes are used within the first and second proposed designs, respectively. This innovation eliminates threshold detectors from critical path and makes it shorter. Then, the new adders are employed in the image processing application of Laplace filter, which detects edges in an image. Findings Simulation results demonstrate very high-speed operation for the first and second proposed designs, which are, respectively, 44.7 per cent and 21.6 per cent faster than the next high-speed adder cell. In addition, they make a reasonable compromise between power-delay product (PDP) and other important evaluating factors in the context of approximate computing. They have very few transistors and very low total error distance. In addition, they do not propagate error to higher bit positions by generating output carry correctly. According to the investigations, up to four inexact FA can be used in the Laplace filter computations without a significant image quality loss. The employment of the first and second proposed designs results in 42.4 per cent and 32.2 per cent PDP reduction compared to when no approximate FA are used in an 8-bit ripple adder. Originality/value Two new current-mode inexact FA are presented. They use diodes as voltage regulators to design current-mode approximate full-adders with very short critical path for the first time.
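The total error distance used to evaluate such cells can be computed exhaustively over the eight input combinations of a 1-bit adder. The sketch below uses a generic, hypothetical approximation (exact carry, sum taken as the inverted carry) that mirrors the property described above of generating the output carry correctly so error never propagates to higher bit positions; it is not the paper's current-mode circuit.

```python
from itertools import product

def exact_fa(a, b, cin):
    """Exact 1-bit full adder."""
    s = a ^ b ^ cin
    cout = (a & b) | (cin & (a ^ b))
    return s, cout

def approx_fa(a, b, cin):
    """Illustrative approximate full adder (hypothetical, not the paper's
    design): carry is exact, sum is the inverted carry, which is wrong
    only for inputs 000 and 111."""
    cout = (a & b) | (cin & (a ^ b))
    return 1 - cout, cout

def total_error_distance(fa):
    """Sum of |approximate - exact| output values over all 8 input cases."""
    ted = 0
    for a, b, cin in product((0, 1), repeat=3):
        s_e, c_e = exact_fa(a, b, cin)
        s_a, c_a = fa(a, b, cin)
        ted += abs((2 * c_a + s_a) - (2 * c_e + s_e))
    return ted
```

Because the carry output is exact, the error distance of each cell stays bounded at the sum bit, which is why several such cells can be cascaded in a ripple adder with limited quality loss.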


2018 ◽  
Vol 90 (6) ◽  
pp. 962-966
Author(s):  
SungKwan Ku ◽  
Hojong Baik ◽  
Taehyoung Kim

Purpose Surveillance equipment is one of the most important parts of current air traffic control systems. It provides aircraft position and other relevant information, including flight parameters. However, existing surveillance equipment exhibits position errors between true and detected positions. Operators must understand and account for the magnitude and frequency characteristics of these position errors, because they can influence the safety of aircraft operations. This study aims to develop a simulation model for analyzing these surveillance position errors to improve the safety of aircraft at airports. Design/methodology/approach This study investigates the characteristics of the position errors observed in the airport surface detection equipment of an airport ground surveillance system and proposes a practical method to numerically reproduce those characteristics. Findings The proposed approach represents position errors more accurately than an alternative simple approach. This study also discusses the application of the computational results in a microscopic simulation modeling environment. Practical implications The surveillance error is analyzed from radar trajectory data, and a random generator is configured to reproduce these data. The generated errors are fed into the air transportation simulation through an application programming interface, which applies them to the aircraft trajectory data in the simulation. Subsequently, additionally built environment data are used in the actual simulation to obtain results from the simulation engine. Originality/value The presented surveillance error analysis and simulation, with its implementation plan, are expected to be useful for air transportation safety simulations.
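A random generator configured from observed error data, as described under practical implications, can be sketched as follows. The Gaussian error model and the metre-valued parameters are illustrative assumptions; the paper matches its generator to the measured radar trajectory data rather than to a fixed distribution.

```python
import random
import statistics

def fit_error_model(errors):
    """Fit a simple Gaussian model (mean, standard deviation in metres) to
    observed position errors. The Gaussian form is an illustrative
    assumption, not the paper's empirically matched distribution."""
    return statistics.mean(errors), statistics.stdev(errors)

def error_generator(mu, sigma, rng=None):
    """Endless stream of synthetic position errors, suitable for injecting
    into a simulated aircraft trajectory via a simulation API."""
    rng = rng or random.Random()
    while True:
        yield rng.gauss(mu, sigma)
```

In use, the fitted parameters close the loop: errors drawn from the generator should reproduce the mean and spread of the radar data they were fitted to.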


2012 ◽  
Vol 17 (4) ◽  
pp. 207-216 ◽  
Author(s):  
Magdalena Szymczyk ◽  
Piotr Szymczyk

Abstract MATLAB is a technical computing language used in a variety of fields, such as control systems, image and signal processing, visualization and financial process simulation, in an easy-to-use environment. MATLAB offers "toolboxes", which are specialized libraries for a variety of scientific domains, as well as a simplified interface to high-performance libraries such as LAPACK, BLAS and FFTW. MATLAB has also been enriched by the possibility of parallel computing with the Parallel Computing Toolbox™ and MATLAB Distributed Computing Server™. In this article we present some of the key features of MATLAB parallel applications, focused on using GPU processors for image processing.


2017 ◽  
Vol 1 (1) ◽  
pp. 41
Author(s):  
Angeliki Moisidou

A statistical analysis has been conducted with the aim of elucidating the effect of health care systems (HSs) on health inequalities, assessed in terms of (a) differential access to health care services and (b) varying health outcomes among the different models of HSs in the EU-15 (Beveridge: UK, IE, SE, FI, DK; Bismarck: DE, FR, BE, LU, AT, NL; Southern European model: GR, IT, ES, PT). In interpreting the results of the empirical analysis, we have ascertained systematic differences among the HSs in the EU-15. Specifically, it is concluded that countries with a Beveridge HS can be characterized as more efficient (than average) in most of the examined correlations, showing particularly high performance in the health sector. Similarly, countries with a Bismarck HS record fairly satisfactory performance, but they simultaneously display more structural weaknesses compared with the Beveridge model. In addition, our empirical analysis has shown that adopting the Bismarck model entails a higher economic cost compared with the Beveridge model, which is financed directly by taxation. By contrast, the countries with a Southern European HS generally show the lowest performance, which can be attributed to the residual social protection that characterizes these countries. The paper concludes with a synthesis of the empirical findings of our research. It proposes some directions for further research and presents a set of implications for policymakers regarding the planning and implementation of appropriate policies to tackle health inequality within HSs.

