Akuisisi Foreground dan Background Berbasis Fitur DCT pada Matting Citra secara Otomatis (Automatic Acquisition of Foreground and Background Based on DCT Features for Image Matting)

2020 ◽  
Vol 7 (3) ◽  
pp. 547
Author(s):  
Meidya Koeshardianto ◽  
Eko Mulyanto Yuniarno ◽  
Mochamad Hariadi

<p>Separating the foreground from the background of a still image is a much-needed line of research in computer vision. Image segmentation is the technique most often used, but its extraction results are still insufficiently accurate. Image matting is one solution for improving the results of image segmentation. In supervised methods, image matting requires scribbles or a trimap as a constraint that labels a region as foreground or background. This paper builds an unsupervised method that acquires the foreground and background automatically and uses them as the constraint. Background acquisition is determined from the variance of DCT (Discrete Cosine Transform) feature values, clustered with the k-means algorithm; foreground acquisition is determined from a subset of the DCT clusters combined with edge-detection features. The acquired foreground and background then serve as the constraint. Differences in the results were measured with the MAE (Mean Absolute Error) against both the supervised matting method and other unsupervised matting methods. The experimental MAE scores show that the resulting alpha matte differs by 0.0336, with a processing-time difference of 0.4 s, from the supervised matting method. All image data come from images used by previous researchers.</p><p><em><strong>Abstract</strong></em></p><p class="Abstract"><em>The technique of separating the foreground from the background of a still image is widely used in computer vision. A common approach is image segmentation; however, its extraction results are considered inaccurate. Image matting is one solution for improving the results of image segmentation. Most matting processes use scribbles or a trimap as a constraint, produced manually in what is called a supervised method. The contribution offered in this paper lies in acquiring the foreground and background automatically to build the constraint. Background acquisition is determined from the variance of DCT feature values clustered with the k-means algorithm; foreground acquisition is determined from a subset of the DCT clusters combined with edge-detection features. The results of these two stages are used as an automatic constraint, which is then supplied to a supervised matting method. The differences in results were measured using the MAE (Mean Absolute Error), compared with the supervised matting method and with other unsupervised matting methods. The MAE scores from the experiments show that the resulting alpha matte differs by 0.0336, and the processing time by 0.4 seconds, from the supervised matting method. All image data come from images that have been used by previous researchers.</em></p>
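The background-acquisition step described above (block-wise DCT features clustered with k-means) can be sketched as follows. This is a simplified illustration, not the authors' implementation; the 8-pixel block size, the use of per-block coefficient variance, and the two-cluster split are assumptions:

```python
import numpy as np
from scipy.fft import dctn
from scipy.cluster.vq import kmeans2

def dct_variance_map(gray, block=8):
    """Variance of the 2-D DCT coefficients of each non-overlapping block."""
    h = gray.shape[0] - gray.shape[0] % block
    w = gray.shape[1] - gray.shape[1] % block
    out = np.empty((h // block, w // block))
    for i in range(0, h, block):
        for j in range(0, w, block):
            coeffs = dctn(gray[i:i + block, j:j + block], norm='ortho')
            out[i // block, j // block] = coeffs.var()
    return out

# Cluster the block variances into two groups; the low-variance cluster
# serves as the background candidate region.
rng = np.random.default_rng(0)
gray = rng.random((64, 64))          # stand-in for a grayscale image
var_map = dct_variance_map(gray)
_, labels = kmeans2(var_map.reshape(-1, 1), 2, minit='++', seed=0)
```

The foreground subset would then be refined from these clusters with an edge-detection feature, as the abstract describes.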

2022 ◽  
Vol 13 ◽  
Author(s):  
Niklas Wulms ◽  
Lea Redmann ◽  
Christine Herpertz ◽  
Nadine Bonberg ◽  
Klaus Berger ◽  
...  

Introduction: White matter hyperintensities of presumed vascular origin (WMH) are an important magnetic resonance imaging marker of cerebral small vessel disease and are associated with cognitive decline, stroke, and mortality. Their relevance in healthy individuals, however, is less clear. This is partly due to the methodological challenge of accurately measuring rare and small WMH with automated segmentation programs. In this study, we tested whether WMH volumetry with the FMRIB Software Library v6.0 (FSL; https://fsl.fmrib.ox.ac.uk/fsl/fslwiki) Brain Intensity AbNormality Classification Algorithm (BIANCA), a customizable and trainable algorithm that quantifies WMH volume based on individual data training sets, can be optimized for a normal aging population.

Methods: We evaluated the effect of varying training sample sizes on the accuracy and robustness of the predicted white matter hyperintensity volume in a population (n = 201) with a low prevalence of confluent WMH and a substantial proportion of participants without WMH. BIANCA was trained with seven different sample sizes between 10 and 40, in increments of 5. For each sample size, 100 random samples of T1w and FLAIR images were drawn and trained with manually delineated masks. For validation, we defined an internal and an external validation set and compared the mean absolute error resulting from the difference between manually delineated and predicted WMH volumes for each set. For spatial overlap, we calculated the Dice similarity index (SI) for the external validation cohort.

Results: The study population had a median WMH volume of 0.34 ml (IQR 1.6 ml) and included n = 28 (18%) participants without any WMH. The mean absolute error of the difference between BIANCA predictions and manually delineated masks decreased and became more robust with an increasing number of training participants. The lowest mean absolute error of 0.05 ml (SD 0.24 ml) was identified in the external validation set with a training sample size of 35. Compared to the volumetric overlap, the spatial overlap was poor, with an average Dice similarity index of 0.14 (SD 0.16) in the external cohort, driven by subjects with very low lesion volumes.

Discussion: We found that the performance of BIANCA, particularly the robustness of its predictions, could be optimized for use in populations with a low WMH load by enlarging the training sample size. Further work is needed to evaluate and potentially improve the prediction accuracy for low lesion volumes. These findings are important for current and future population-based studies in which the majority of participants are normally aging people.
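The two validation metrics used above, the volumetric mean absolute error and the Dice similarity index, can be computed as below. This is a minimal generic sketch; the convention of scoring two empty masks as perfect agreement is an assumption (relevant here because some participants have no WMH at all):

```python
import numpy as np

def dice_index(pred, truth):
    """Dice similarity index between two binary lesion masks."""
    pred, truth = np.asarray(pred, bool), np.asarray(truth, bool)
    denom = pred.sum() + truth.sum()
    if denom == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return 2.0 * np.logical_and(pred, truth).sum() / denom

def volume_mae(pred_vols, manual_vols):
    """Mean absolute error between predicted and manual WMH volumes (ml)."""
    diff = np.asarray(pred_vols, float) - np.asarray(manual_vols, float)
    return float(np.abs(diff).mean())

pred = np.zeros((4, 4), bool); pred[1:3, 1:3] = True    # 4 voxels
truth = np.zeros((4, 4), bool); truth[1:3, 1:4] = True  # 6 voxels, 4 shared
si = dice_index(pred, truth)   # 2*4 / (4+6) = 0.8
```

As the abstract notes, Dice can look poor even when volumes agree well: with very small lesions, a one-voxel misplacement removes most of the overlap term while barely changing the volume difference.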


Author(s):  
J. Choi ◽  
L. Zhu ◽  
H. Kurosu

In this study, we developed a methodology for detecting cracks in paved road surfaces using a 3D digital surface model of the road, created automatically with a three-dimensional laser scanner based on the light-section method. To detect cracks in the imagery data of the model, a background subtraction method (the Rolling Ball Background Subtraction Algorithm) was applied to filter out background noise originating from undulation and gradual slopes, as well as ruts caused by wear, aging, excessive use of the road, and other factors. We confirmed that the influence of the height (depth) differences arising from these causes can be reduced significantly at this stage. Various ball radii were applied to check how the results of this process vary with the parameter, and it became clear that there are no important differences as long as the radius lies within a certain range. Image segmentation was then performed by multi-resolution segmentation based on the object-based image analysis technique, using scale, pixel value (height/depth), and object compactness as parameters. For the classification of cracks in the database, height, length, and other geometric properties are used, and we confirmed that the method is useful for detecting cracks in a paved road surface.
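A rolling-ball background can be realized as a grey-scale morphological operation with a ball-shaped structuring element. The sketch below illustrates the principle and is not the authors' code: for depressions such as cracks, the ball is conceptually rolled over the top of the surface (a grey-scale closing), so slow undulations and ruts are followed while narrow dips are bridged; the residual then highlights crack candidates. The radius and the synthetic surface are assumptions:

```python
import numpy as np
from scipy import ndimage

def crack_residual(surface, radius):
    """Rolling-ball-style background subtraction for surface depressions.

    Grey-scale closing with a non-flat, ball-shaped structuring element
    estimates the background; the non-negative residual (background minus
    surface) is large only where the ball cannot follow the surface,
    i.e. at dips narrower than the ball."""
    r = int(radius)
    y, x = np.ogrid[-r:r + 1, -r:r + 1]
    dist2 = x * x + y * y
    footprint = dist2 <= r * r
    heights = np.where(footprint, np.sqrt(np.clip(r * r - dist2, 0, None)), 0)
    background = ndimage.grey_closing(surface, footprint=footprint,
                                      structure=heights)
    return background - surface

# Synthetic road profile: a gentle lateral slope with a 1-pixel-wide crack.
yy, xx = np.mgrid[0:40, 0:40]
surface = 0.05 * xx.astype(float)
surface[20, 10:30] -= 2.0          # narrow crack, 2 units deep
res = crack_residual(surface, radius=5)
```

The residual stays near zero on the smooth slope but is close to the crack depth along the crack, which matches the paper's observation that gradual slopes and ruts are filtered out before segmentation.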




2020 ◽  
Author(s):  
Takeshi Teshigawara ◽  
Akira Meguro ◽  
Nobuhisa Mizuki

Abstract Background: We assessed the accuracy and tendencies of the VERION image-guided system (Alcon) and the intraocular lens (IOL) Master 700 (Zeiss) by comparing the mean refractive shift (MRS) of predicted postoperative refraction (PPR), the mean absolute error (MAE) of PPR, the recommended IOL power (RIP), and the K-value before and after optimizing the IOL constant in the VERION, to show the importance of optimization.

Methods: This retrospective study involved 72 eyes. The K-value was measured with both biometers. Axial length (AL) and anterior chamber depth (ACD) measured by the IOL Master were applied to the VERION, because it cannot measure these variables. The User Group for Laser Interference Biometry (ULIB) IOL constant for the IOL Master was applied to the VERION before optimization, since no such official constant has been established for the VERION. The MRS of PPR, MAE of PPR, RIP, and K-value measured by both biometers were compared before and after optimizing the IOL constant in the VERION. Finally, correlations of the MRS, MAE, and RIP with the K-value were analyzed in the VERION. The Wilcoxon signed-rank test was used for analysis.

Results: Compared to the IOL Master, the K-value was significantly higher in the VERION. Prior to optimization, the MRS of PPR showed a significant myopic shift in the VERION, and the MAE of PPR was significantly higher. Additionally, the RIP in the VERION was significantly lower. After optimization, there were no significant differences in the MRS of PPR or the RIP between the VERION and the IOL Master, and the MAE of PPR in the IOL Master was significantly higher than in the VERION. No significant correlations were found between the MRS, MAE of PPR, or RIP and the K-value in the VERION.

Conclusions: Before optimization, the VERION was less reliable in MRS, MAE, and RIP than the IOL Master. After optimization, the difference in MRS and RIP between the two devices became insignificant, and the VERION was more accurate in PPR than the IOL Master. This study indicates that optimization of the IOL constant in the VERION is vital.
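The two headline metrics, the mean refractive shift (a signed error, so myopic and hyperopic shifts can cancel) and the mean absolute error of the predicted postoperative refraction, can be computed as below. This is a generic sketch with hypothetical values; the sign convention (achieved minus predicted spherical equivalent, in dioptres) is an assumption:

```python
import numpy as np

def refraction_errors(predicted_se, achieved_se):
    """Return (mean refractive shift, mean absolute error) in dioptres.

    The prediction error of each eye is taken as achieved minus predicted
    spherical equivalent; the MRS keeps the sign (a negative value is a
    myopic shift), while the MAE discards it."""
    err = np.asarray(achieved_se, float) - np.asarray(predicted_se, float)
    return float(err.mean()), float(np.abs(err).mean())

predicted = [-0.25, -0.50, 0.00, -0.75]   # hypothetical PPR values (D)
achieved = [-0.50, -0.25, -0.25, -0.75]   # hypothetical postoperative SE (D)
mrs, mae = refraction_errors(predicted, achieved)
```

A systematic myopic shift, as seen in the unoptimized VERION, appears as a negative MRS even when the MAE is moderate; optimizing the IOL constant recentres the error distribution and drives the MRS toward zero.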


Author(s):  
Hong Shen

In this chapter, we will give an intuitive introduction to the general problem of 3D medical image segmentation. We will give an overview of the popular and relevant methods that may be applicable, with a discussion about their advantages and limits. Specifically, we will discuss the issue of incorporating prior knowledge into the segmentation of anatomic structures and describe in detail the concept and issues of knowledge-based segmentation. Typical sample applications will accompany the discussions throughout this chapter. We hope this will help an application developer to improve insights in the understanding and application of various computer vision approaches to solve real-world problems of medical image segmentation.


2020 ◽  
Vol 38 (3) ◽  
pp. 725-748
Author(s):  
Gizaw Mengistu Tsidu ◽  
Mulugeta Melaku Zegeye

Abstract. Earth's ionosphere is an important medium of radio wave propagation in modern times. However, the effective use of the ionosphere depends on an understanding of its spatiotemporal variability. Towards this end, a number of ground- and space-based monitoring facilities have been set up over the years. The information from these stations has also been complemented by model-based studies. However, the performance of ionospheric models in capturing observations needs to be assessed. In this work, the performance of the IRI-2016 model in simulating the total electron content (TEC) observed by a network of Global Positioning System (GPS) receivers is evaluated based on the RMSE, the bias, the mean absolute error (MAE), the skill score, the normalized mean bias factor (NMBF), the normalized mean absolute error factor (NMAEF), the correlation, and categorical metrics such as the quantile probability of detection (QPOD), the quantile categorical miss (QCM), and the quantile critical success index (QCSI). The IRI-2016 simulations are evaluated against gridded International GNSS Service (IGS) GPS-TEC and TEC observations at a network of GPS receiver stations during the solar minimum in 2008 and the solar maximum in 2013. The phases of the modeled and observed TEC time series agree strongly over most of the globe, as indicated by high correlations at all solar activity levels, with the exception of the polar regions. In addition, lower RMSE, MAE, and bias values between modeled and measured TEC are observed during the solar minimum than during the solar maximum for both sets of observations. The model performance is also found to vary with season, longitude, solar zenith angle, and magnetic local time. These variations in model skill arise from differences between seasons with respect to solar irradiance, the direction of neutral meridional winds, neutral composition, and the longitudinal dependence of tidally induced wave-number-four structures. Moreover, the variation in model performance as a function of solar zenith angle and magnetic local time might be linked to the accuracy of the ionospheric parameters used to characterize both the bottomside and topside ionospheres. However, when the NMBF and NMAEF are applied to the data sets from the two distinct solar activity periods, the difference in model skill between the two periods decreases, suggesting that the traditional evaluation metrics exaggerate it. Moreover, the performance of the model in capturing the highest ends of extreme values over the geomagnetic equator, midlatitudes, and high latitudes is poor, as noted from the decrease in the QPOD and QCSI, as well as the increase in the QCM, over most of the globe as the threshold percentile of TEC increases from 10 % to 90 % during both the solar minimum and the solar maximum. The performance of IRI-2016 in correctly simulating observed low (as low as the 10th percentile) and high (above the 90th percentile) TEC over the equatorial ionization anomaly (EIA) crest regions is reasonably good, given that IRI-2016 is a climatological model. However, it is worth noting that its performance at the highest ends of the TEC distribution is relatively poor in 2013 compared with 2008. This study thus reveals, for the first time, the strengths and weaknesses of the IRI-2016 model in simulating the observed TEC distribution across all seasons and solar activity levels.
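The NMBF and NMAEF used above are the symmetric metrics of Yu et al. (2006). The sketch below is my reading of their standard definitions, not code from this study: the normalizing denominator switches between the observed and modeled sums, so overestimation and underestimation by the same factor yield values of equal magnitude and opposite sign, which is why these metrics compress the apparent skill gap between the high-TEC and low-TEC periods:

```python
import numpy as np

def nmbf(model, obs):
    """Normalized mean bias factor: sum(M)/sum(O) - 1 when the model
    overestimates on average, 1 - sum(O)/sum(M) when it underestimates."""
    m, o = np.asarray(model, float).sum(), np.asarray(obs, float).sum()
    return m / o - 1.0 if m >= o else 1.0 - o / m

def nmaef(model, obs):
    """Normalized mean absolute error factor, with the matching denominator."""
    m, o = np.asarray(model, float), np.asarray(obs, float)
    denom = o.sum() if m.sum() >= o.sum() else m.sum()
    return float(np.abs(m - o).sum() / denom)

# A model that doubles every observation scores NMBF = +1; one that
# halves every observation scores NMBF = -1 (symmetric by construction).
```

By contrast, a ratio-style bias metric would give 2.0 and 0.5 for those two cases, overstating the asymmetry between periods with very different TEC magnitudes.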


2011 ◽  
pp. 1144-1161
Author(s):  
Hong Shen

In this chapter, we will give an intuitive introduction to the general problem of 3D medical image segmentation. We will give an overview of the popular and relevant methods that may be applicable, with a discussion about their advantages and limits. Specifically, we will discuss the issue of incorporating prior knowledge into the segmentation of anatomic structures and describe in detail the concept and issues of knowledge-based segmentation. Typical sample applications will accompany the discussions throughout this chapter. We hope this will help an application developer to improve insights in the understanding and application of various computer vision approaches to solve real-world problems of medical image segmentation.




2021 ◽  
Vol 11 (1) ◽  
Author(s):  
Henrique Aragão Arruda ◽  
Joana M. Pereira ◽  
Arminda Neves ◽  
Maria João Vieira ◽  
Joana Martins ◽  
...  

Abstract Analysis of refractive outcomes, using biometry data collected with a new biometer (Pentacam-AXL, OCULUS, Germany) and a reference biometer (Lenstar LS 900, HAAG-STREIT AG, Switzerland), to assess differences between predicted and actual refraction using different formulas. This was a prospective, institutional study in which intraocular lens (IOL) power calculation was performed using the Haigis, SRK/T, and Hoffer Q formulas with the two systems in patients undergoing cataract surgery between November 2016 and August 2017. Four to 6 weeks after surgery, the spherical equivalent (SE) was derived from objective refraction. The mean prediction error (PE), mean absolute error (MAE), and median absolute error (MedAE) were calculated, and the percentage of eyes within ± 0.25, ± 0.50, ± 1.00, and ± 2.00 D of MAE was determined. 104 eyes from 76 patients, 35 of them male (46.1%), underwent uneventful phacoemulsification with IOL implantation. Mean SE after surgery was − 0.29 ± 0.46 D. The mean PE using the SRK/T, Haigis, and Hoffer Q formulas with the Lenstar was significantly different (p < 0.0001) from the PE calculated with the Pentacam for all three formulas. The percentage of eyes within ± 0.25 D of MAE was larger with the Lenstar device for all three formulas. The difference between the actual and predicted refractive error was consistently lower with the Lenstar. Pentacam-AXL users should therefore be alert to the critical necessity of constant optimization in order to obtain optimal refractive results.
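The accuracy summaries reported here, the median absolute prediction error and the share of eyes within each dioptre band, are straightforward to compute. The sketch below is generic, uses hypothetical values, and assumes the prediction error is achieved minus predicted spherical equivalent:

```python
import numpy as np

def refraction_accuracy(predicted, achieved, bands=(0.25, 0.50, 1.00, 2.00)):
    """Median absolute error (D) and fraction of eyes within each +/- band."""
    abs_err = np.abs(np.asarray(achieved, float) - np.asarray(predicted, float))
    medae = float(np.median(abs_err))
    within = {band: float((abs_err <= band).mean()) for band in bands}
    return medae, within

predicted = [0.00, -0.50, -1.00, 0.25]   # hypothetical predicted SE (D)
achieved = [-0.10, -0.40, -0.30, 0.25]   # hypothetical achieved SE (D)
medae, within = refraction_accuracy(predicted, achieved)
```

The MedAE is often reported alongside the MAE because it is insensitive to a few outlier eyes, while the per-band fractions show how the whole error distribution compares between formulas or devices.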


2021 ◽  
Vol 21 (1) ◽  
Author(s):  
Ali Alqerban ◽  
Muath Alrashed ◽  
Ziyad Alaskar ◽  
Khalid Alqahtani

Abstract Background The aims of this study were to create a method for estimating dental age in Saudi children and adolescents, based on the Willems model developed using the Belgian Caucasian (BC) reference data, and to compare the ability of the two models to predict age in Saudi children. Methods Development of the seven lower left permanent mandibular teeth was staged in 1146 panoramic radiographs from healthy Saudi children (605 male, 541 female) with no missing permanent teeth and in whom not all permanent teeth (third molars excepted) were yet fully developed. The data were used to validate the Willems BC model and to construct a Saudi Arabian-specific (Willems SA) model. The mean error, mean absolute error, and root mean square error obtained from both validations were compared to quantify the variance in errors in the sample. Results The overall mean error for the Willems SA method was 0.023 years (standard deviation ± 0.55), indicating no systematic underestimation or overestimation of age. For girls, the error using the Willems SA method was significantly lower but still negligible at 0.06 years. A small but statistically significant difference in total mean absolute error (11 days) was found between the Willems BC and Willems SA models when the data were compared independent of sex. The overall mean absolute error for girls was slightly lower for the Willems BC method than for the Willems SA method (1.33 years vs. 1.37 years). Conclusions The difference in ability to predict dental age between the Willems BC and Willems SA methods is very small, indicating that data from the BC population can be used as a reference in the Saudi population.
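The three agreement statistics compared above can be computed per model as follows. This is a generic sketch with hypothetical ages, not the study's data; the signed mean error captures systematic bias, while the MAE and RMSE capture the spread of individual errors:

```python
import numpy as np

def age_error_stats(estimated, chronological):
    """Mean error, mean absolute error, and RMSE of dental-age estimates.

    A mean error near zero indicates no systematic over- or
    underestimation; the MAE and RMSE quantify the typical error size,
    with the RMSE weighting large errors more heavily."""
    e = np.asarray(estimated, float) - np.asarray(chronological, float)
    return (float(e.mean()), float(np.abs(e).mean()),
            float(np.sqrt((e ** 2).mean())))

estimated = [8.25, 10.00, 12.50]       # hypothetical dental ages (years)
chronological = [8.00, 10.50, 12.00]   # hypothetical chronological ages
me, mae, rmse = age_error_stats(estimated, chronological)
```

Running each model's estimates through the same function lets the Willems BC and Willems SA variants be compared on identical terms, as done in the validation above.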

