Fractal Analysis of Laplacian Pyramidal Filters Applied to Segmentation of Soil Images

2014 ◽  
Vol 2014 ◽  
pp. 1-13
Author(s):  
J. de Castro ◽  
F. Ballesteros ◽  
A. Méndez ◽  
A. M. Tarquis

The Laplacian pyramid is a well-known technique for image processing in which local operators of many scales, but identical shape, serve as the basis functions. The properties required of the pyramidal filter produce a family of filters that is uniparametric in the classical case, when the filter length is 5. We examine the Gaussian and fractal behaviour of these basis functions (or filters) and determine the Gaussian and fractal ranges of the single parameter a. These fractal filters lose less energy at every step of the Laplacian pyramid, and we apply this property to obtain threshold values for segmenting soil images and then evaluate their porosity. We also evaluate our results by comparing them with the Otsu algorithm's threshold values and conclude that our algorithm produces reliable results.
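As a rough illustration of the pyramid construction described above (a minimal 1-D sketch, not the authors' code; the parameter value a = 0.375 is an assumption, giving the binomial [1, 4, 6, 4, 1]/16 member of the uniparametric 5-tap family):

```python
import numpy as np

A = 0.375  # assumed single parameter of the 5-tap family
W = np.array([0.25 - A / 2, 0.25, A, 0.25, 0.25 - A / 2])

def downsample(sig):
    # Blur with the 5-tap kernel, then keep every other sample.
    return np.convolve(sig, W, mode="same")[::2]

def upsample(sig, size):
    # Zero-stuff to the target size and blur (kernel doubled for unit DC gain).
    up = np.zeros(size)
    up[::2] = sig
    return np.convolve(up, 2 * W, mode="same")

def laplacian_pyramid(signal, levels):
    # Each level stores the detail lost between one scale and the next,
    # plus a coarsest residual at the end.
    pyramid, current = [], signal
    for _ in range(levels):
        small = downsample(current)
        pyramid.append(current - upsample(small, current.shape[0]))
        current = small
    pyramid.append(current)
    return pyramid
```

Because each level stores exactly the detail removed at that step, summing each level with the re-expanded coarser signal reconstructs the input exactly, whatever the filter parameter.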

ACS Omega ◽  
2021 ◽  
Author(s):  
Ilka Engelmann ◽  
Enagnon Kazali Alidjinou ◽  
Judith Ogiez ◽  
Quentin Pagneux ◽  
Sana Miloudi ◽  
...  

2015 ◽  
Vol 14 (02) ◽  
pp. 1550017
Author(s):  
Pichid Kittisuwan

The application of image processing in industry has shown remarkable success over the last decade, for example in security and telecommunication systems, and the denoising of natural images corrupted by Gaussian noise is a classical problem in image processing. This paper is concerned with dual-tree complex wavelet-based image denoising using Bayesian techniques. One of the cruxes of Bayesian image denoising algorithms is estimating the statistical parameters of the image. Here, we employ maximum a posteriori (MAP) estimation of the local observed variance, with a generalized Gamma density as the prior on the local variance and a Laplacian or Gaussian distribution for the noisy wavelet coefficients. Our choice of prior is motivated by the efficiency and flexibility of the generalized Gamma density. The experimental results show that the proposed method yields good denoising results.
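A hedged sketch of the shrinkage idea behind such Bayesian denoisers: estimate a local signal variance for each wavelet coefficient and apply Wiener-style attenuation. Note this simplification replaces the paper's MAP estimate under a generalized Gamma prior with a plain moment-based local estimate, and works on a 1-D coefficient array:

```python
import numpy as np

def local_variance_shrink(coeffs, noise_var, win=9):
    # Estimate local signal variance from a sliding window of squared
    # coefficients (the paper instead uses a MAP estimate under a
    # generalized Gamma prior), then shrink each coefficient by the
    # Wiener-style gain signal_var / (signal_var + noise_var).
    kernel = np.ones(win) / win
    local_energy = np.convolve(coeffs ** 2, kernel, mode="same")
    signal_var = np.maximum(local_energy - noise_var, 0.0)
    gain = signal_var / (signal_var + noise_var)
    return gain * coeffs
```

Coefficients whose local energy is near the noise floor are pulled toward zero, while strong coefficients pass almost unchanged.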


Author(s):  
А. Муравьев ◽  
И. Тихомирова ◽  
А. Замышляев ◽  
П. Михайлов ◽  
Е. Петроченко ◽  
...  

Introduction. Aggregation and deformability of erythrocytes are the two microrheological characteristics that determine blood flow in the microcirculation vessels. The large majority of devices for registering erythrocyte aggregation (EA) provide no visualization of the process, so interpretation of the data rests on its indirect characteristics. Materials and methods. EA was investigated on a purpose-built unit, the aggregatoscope; the recorded EA pictures were then processed with a dedicated computer program. The informativeness of the data was verified in comparative studies using the Myrenne M1 erythrocyte aggregometer and the erythrocyte sedimentation rate (ESR) test. Results. Significant positive correlations were obtained (r=0.90 and r=0.86, respectively). The aggregatoscope gave a clear picture of EA changes (a reduction) during erythrocyte incubation with the Ca2+ chelator group (EDTA, verapamil, isobutylmethylxanthine, monafram). In response to incubation with agents of another group, known as EA stimulants (CaCl2, ionophore A23187, phenylephrine, prostaglandin F2α), a significant increase in aggregation was obtained. Conclusion. Aggregatoscopy combined with software image processing is a convenient and reliable tool for assessing the suspension stability of blood and an accurate method for measuring an important microrheological characteristic of erythrocytes, their aggregation.


2018 ◽  
Vol 2018 ◽  
pp. 1-16 ◽  
Author(s):  
Nhat-Duc Hoang

To improve the efficiency of periodic surveys of asphalt pavement condition, this study puts forward an intelligent method for automating the classification of pavement crack patterns. The new approach relies on image processing techniques and computational intelligence algorithms. The image processing techniques of the Laplacian pyramid and projection integrals are employed to extract numerical features from digital images. Least squares support vector machine (LSSVM) and Differential Flower Pollination (DFP) are the two computational intelligence algorithms employed to construct the crack classification model from the extracted features. LSSVM performs the data classification; since its model construction phase requires a proper setting of the regularization and kernel function parameters, this study relies on DFP to fine-tune these two parameters. A dataset of 500 image samples with five class labels (alligator crack, diagonal crack, longitudinal crack, no crack, and transverse crack) was collected to train and verify the approach. The experimental results show that the Laplacian pyramid is very helpful for enhancing the pavement images and revealing the crack patterns. Moreover, the hybridization of LSSVM and DFP, named DFP-LSSVM, used with the Laplacian pyramid at level 4, achieves the highest classification accuracy rate of 93.04%. The new hybrid DFP-LSSVM approach is thus a promising tool to assist transportation agencies in pavement condition surveys.
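The projection-integral features mentioned above can be sketched very simply: sums of crack pixels along rows and columns of a binarized image. This minimal version (an assumption about the feature's basic form, not the paper's exact pipeline) shows why the profiles discriminate crack orientations:

```python
import numpy as np

def projection_integrals(binary_img):
    # Horizontal and vertical projection integrals of a binary crack map.
    # A longitudinal (vertical) crack concentrates mass in a few columns;
    # a transverse (horizontal) crack concentrates mass in a few rows.
    horizontal = binary_img.sum(axis=1)  # one value per row
    vertical = binary_img.sum(axis=0)    # one value per column
    return horizontal, vertical
```

A classifier such as LSSVM can then be trained on statistics of these two profiles.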


2018 ◽  
Vol 7 (2.7) ◽  
pp. 614 ◽  
Author(s):  
M Manoj krishna ◽  
M Neelima ◽  
M Harshali ◽  
M Venu Gopala Rao

Image classification is a classical problem of image processing, computer vision, and machine learning. In this paper we study image classification using deep learning, employing the AlexNet convolutional neural network architecture. Four test images were selected from the ImageNet database, cropped to various portions of their area, and used in the classification experiments. The results show the effectiveness of deep-learning-based image classification using AlexNet.
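The cropping experiment can be sketched as a center crop that keeps a given fraction of each spatial dimension (a hypothetical helper; the paper does not specify its exact cropping procedure):

```python
import numpy as np

def crop_fraction(img, frac):
    # Keep the central `frac` portion of each spatial dimension, mimicking
    # the experiment of classifying progressively smaller crops.
    h, w = img.shape[:2]
    ch, cw = int(h * frac), int(w * frac)
    top, left = (h - ch) // 2, (w - cw) // 2
    return img[top:top + ch, left:left + cw]
```

Each crop would then be resized to the network's input resolution (227x227 for AlexNet) before classification.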


2021 ◽  
Vol 8 (4) ◽  
pp. 787
Author(s):  
Moechammad Sarosa ◽  
Nailul Muna

Natural disasters are events that can cause damage and create havoc. Collapsing buildings can injure and kill victims, and because humans cannot predict the location and timing of natural disasters, casualties can be high. After a natural disaster, the first task is therefore to find and rescue trapped victims, and the SAR team must evacuate them quickly to reduce losses. In practice, however, the SAR team faces many obstacles during the evacuation, from hard-to-reach terrain to limited equipment. In this study, a detection system for victims of natural disasters was implemented, based on image processing, with the aim of helping the SAR team develop equipment for finding victims in locations that are difficult or dangerous for humans to reach directly. The algorithm used to detect the presence or absence of victims in an image is You Only Look Once (YOLO); two variants, YOLOv3 and YOLOv3 Tiny, were compared. In the tests, the F1 score reached 95.3% when using YOLOv3 with 100 training images and 100 test images.
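The F1 score used to compare the two detectors combines precision and recall over matched detections; a minimal sketch of that metric (counts of true/false positives and false negatives are assumed inputs):

```python
def detection_f1(true_positives, false_positives, false_negatives):
    # Precision: fraction of detections that are real victims.
    precision = true_positives / (true_positives + false_positives)
    # Recall: fraction of real victims that were detected.
    recall = true_positives / (true_positives + false_negatives)
    # F1: harmonic mean of precision and recall.
    return 2 * precision * recall / (precision + recall)
```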


2011 ◽  
Vol 331 ◽  
pp. 516-527
Author(s):  
Peng Zi Sun ◽  
Ji Peng Cao

This paper assesses the test reliability of the Uster AFIS for impurity testing by calculating the Reliable Test Time (hereinafter referred to as RTT) and the CV% of the test results. The CV% values of the impurity-related parameters of card sliver were calculated from 8 experiments comprising 313 different test plans in total. Using statistical analysis, the reliable test time of the AFIS for several impurity-related parameters was estimated. It is concluded that the impurity result obtained from 10 repeated AFIS tests was inaccurate, because the sample weight is too small, the impurity is unevenly distributed, and some of the impurity in the card sliver may be lost during manual sampling.
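The CV% statistic used above is the standard coefficient of variation of repeated test results, expressed in percent (a sketch of the standard formula; the paper's exact computation is not given):

```python
import numpy as np

def cv_percent(values):
    # Coefficient of variation of repeated test results, in percent:
    # sample standard deviation divided by the mean.
    values = np.asarray(values, dtype=float)
    return 100.0 * values.std(ddof=1) / values.mean()
```

A high CV% over repeated tests signals that the test time (or sample size) is too small for a reliable impurity result.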


1994 ◽  
Vol 8 (5) ◽  
pp. 313-318 ◽  
Author(s):  
Cameron L. Jones ◽  
Greg T. Lonergan ◽  
David E. Mainwaring

2018 ◽  
Vol 2018 ◽  
pp. 1-17 ◽  
Author(s):  
Reyes Yam-Uicab ◽  
José López-Martínez ◽  
Erika Llanes-Castro ◽  
Lizzie Narvaez-Díaz ◽  
Joel Trejo-Sánchez

Detecting and counting elliptical objects is an interesting problem in digital image processing, with real-world applications in various disciplines. The problem is harder when the elliptical objects occlude one another, since occluded objects are generally treated as part of a single bigger object (a conglomerate). A solution must detect and segment the precise number of occluded elliptical objects while omitting all uninteresting objects. A variety of computational approaches address this problem, but they are not accurate under occlusion. This paper presents an algorithm designed to detect, segment, and count elliptical objects of a specific size when they are in occlusion with other objects within the conglomerate. Our algorithm involves a time-consuming combinatorial process, so to optimize its execution time we implemented a parallel GPU version in CUDA-C, which experimentally improved the detection of occluded objects and lowered processing times compared with the sequential version. Comparative tests against another method from the literature showed improved detection of occluded objects when using the proposed parallel method.
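A basic building block for such ellipse detectors is fitting a conic to candidate boundary points and checking that it is an ellipse. This hedged sketch uses a direct least-squares fit with the constant term fixed (a simplification in Python rather than the paper's CUDA-C combinatorial method, and it does not by itself handle occlusion):

```python
import numpy as np

def fit_conic(points):
    # Least-squares fit of a*x^2 + b*x*y + c*y^2 + d*x + e*y = 1
    # (constant term fixed to remove the scale ambiguity of the conic).
    x, y = points[:, 0], points[:, 1]
    design = np.column_stack([x * x, x * y, y * y, x, y])
    coeffs, *_ = np.linalg.lstsq(design, np.ones(len(points)), rcond=None)
    return coeffs  # (a, b, c, d, e)

def is_ellipse(coeffs):
    # A conic is an ellipse exactly when its discriminant b^2 - 4ac < 0.
    a, b, c, _, _ = coeffs
    return b * b - 4 * a * c < 0
```

A combinatorial detector can then score subsets of edge points by how well their fitted conic matches an ellipse of the target size, which is the expensive step the paper parallelizes on the GPU.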

