background segmentation
Recently Published Documents

TOTAL DOCUMENTS: 156 (FIVE YEARS: 32)
H-INDEX: 16 (FIVE YEARS: 2)

Scanning ◽  
2021 ◽  
Vol 2021 ◽  
pp. 1-9
Author(s):  
Fatih Veysel Nurçin ◽  
Elbrus Imanov

Manual counting and evaluation of red blood cells in the presence of malaria parasites is a tiresome, time-consuming process that can be affected by environmental conditions and human error. Many algorithms have been presented to segment red blood cells for subsequent parasitemia evaluation by machine learning algorithms. However, the segmentation of overlapping red blood cells has always been a challenge. Marker-controlled watershed segmentation is one of the methods implemented to separate overlapping red blood cells, but images with a high number of overlapping cells remain an issue. We propose a novel approach to improve the segmentation efficiency of marker-controlled watershed segmentation. Local-minimum histogram background segmentation with a selective hole-filling algorithm is introduced to improve the segmentation efficiency of marker-controlled watershed segmentation on images with many overlapping red blood cells. A local minimum of the smoothed histogram is selected for background segmentation. A combination of selective filling, convex hull, and Hough circle detection algorithms is used for the intact segmentation of red blood cells. Markers are computed from the resulting mask, and finally marker-controlled watershed segmentation is applied to separate the overlapping red blood cells. The proposed algorithm achieved higher background segmentation accuracy than popular background segmentation algorithms, and the inclusion of corner details improved watershed segmentation efficiency.
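A minimal sketch of such a pipeline in Python with OpenCV and scikit-image is given below. The histogram-valley search, the smoothing window, and the marker parameters are illustrative assumptions, and the selective filling, convex hull, and Hough circle steps are collapsed into a simple hole-filling stand-in rather than the authors' implementation.

```python
# Histogram-minimum background segmentation followed by marker-controlled
# watershed: a simplified sketch, not the published method.
import numpy as np
import cv2
from scipy import ndimage as ndi
from skimage.segmentation import watershed
from skimage.feature import peak_local_max

def segment_rbcs(gray):
    # Smooth the gray-level histogram and pick a valley between the darker
    # cell peak and the dominant bright background peak (assumed bimodal).
    hist = cv2.calcHist([gray], [0], None, [256], [0, 256]).ravel()
    hist = np.convolve(hist, np.ones(9) / 9, mode="same")
    bg_peak = int(np.argmax(hist))                       # bright background peak
    cell_peak = int(np.argmax(hist[:max(bg_peak, 1)]))   # darker cell peak
    valley = cell_peak + int(np.argmin(hist[cell_peak:bg_peak + 1]))
    cells = (gray < valley).astype(np.uint8)             # foreground = cells

    # Fill holes inside cells (stand-in for the selective hole-filling step).
    cells = ndi.binary_fill_holes(cells).astype(np.uint8)

    # Markers from local maxima of the distance transform, then watershed
    # to split touching or overlapping cells.
    dist = ndi.distance_transform_edt(cells)
    coords = peak_local_max(dist, min_distance=7, labels=cells)
    markers = np.zeros(gray.shape, dtype=np.int32)
    markers[tuple(coords.T)] = np.arange(1, len(coords) + 1)
    return watershed(-dist, markers, mask=cells)

# Hypothetical input image path for illustration only.
labels = segment_rbcs(cv2.imread("smear.png", cv2.IMREAD_GRAYSCALE))
```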


2021 ◽  
Vol 15 ◽  
Author(s):  
Carlotta Martelli ◽  
Douglas Anthony Storace

Olfactory stimuli are encountered across a wide range of odor concentrations in natural environments. Defining the neural computations that support concentration-invariant odor perception, odor discrimination, and odor-background segmentation across a wide range of stimulus intensities remains an open question in the field. In principle, adaptation could allow the olfactory system to adjust sensory representations to the current stimulus conditions, a well-known process in other sensory systems. However, surprisingly little is known about how adaptation changes olfactory representations and affects perception. Here we review the current understanding of how adaptation impacts processing in the first two stages of the vertebrate olfactory system: olfactory receptor neurons (ORNs) and mitral/tufted cells.


Author(s):  
Madhuri Devi Chodey ◽  
C Noorullah Shariff

Pest detection and identification of diseases in agricultural crops are essential to ensure a good product, since they remain a major challenge in the field of agriculture. Effective measures should therefore be taken to fight infestations while minimising the use of pesticides. Image analysis techniques are extensively applied in agricultural science to provide maximum protection to crops, which can lead to better crop management and production. However, automatic pest detection with machine learning technology is still in its infancy. Hence, a video-processing-based pest detection framework is constructed in this work, comprising six major phases: (a) Video Frame Acquisition, (b) Pre-processing, (c) Object Tracking, (d) Foreground and Background Segmentation, (e) Feature Extraction, and (f) Classification. Initially, the video frames are pre-processed, and the movement of the object is tracked with the aid of a foreground and background segmentation approach based on K-Means clustering. From the segmented image, a new feature termed Distributed Intensity-based LBP (DI-LBP) is extracted along with edge and colour features. The features are then subjected to classification with an optimised Neural Network (NN). As a novelty, the NN is trained with a new Dragonfly with New Levy Update (D-NU) algorithm that updates the weights. Finally, the performance of the proposed model is analysed against conventional models with respect to certain performance measures on both video and image datasets.
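The K-Means foreground/background split at the heart of the segmentation phase could look roughly like the sketch below (Python/OpenCV). The cluster count, the colour space, and the minority-cluster heuristic are assumptions, and the DI-LBP features and the D-NU-trained classifier are not reproduced here.

```python
# K-Means colour clustering as a foreground/background separator: a sketch
# under the assumption that moving pests form the minority cluster.
import cv2
import numpy as np

def kmeans_foreground_mask(frame_bgr, k=2):
    pixels = frame_bgr.reshape(-1, 3).astype(np.float32)
    criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 20, 1.0)
    _, labels, _ = cv2.kmeans(pixels, k, None, criteria, 3, cv2.KMEANS_PP_CENTERS)
    labels = labels.reshape(frame_bgr.shape[:2])
    counts = np.bincount(labels.ravel(), minlength=k)
    fg_label = int(np.argmin(counts))          # minority cluster = foreground
    return (labels == fg_label).astype(np.uint8) * 255

# Hypothetical frame path for illustration only.
mask = kmeans_foreground_mask(cv2.imread("frame_0001.png"))
```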


2021 ◽  
Vol 2021 ◽  
pp. 1-11
Author(s):  
Limin Qi

At present, industry research on volleyball technique is relatively in-depth, but analysis of the muscle strength characteristics and coordination involved in jumping for the ball has received less attention, which is not conducive to the control of technical movements. This study used a wireless portable surface EMG tester (16 channels) to analyze the EMG of the main muscle groups of volleyball players and employed a synchronized video test to locate the human body. A background-based frame difference method is therefore proposed to detect and obtain the precise position of the human body. Experiments show that the background-based three-frame difference method effectively eliminates the "hole" effect of the original three-frame difference method and provides an accurate and complete frame for identifying the human body. The recognition frame is adjusted according to the proportion of the human body in the image, and the predefined frame parameters are used to perform foreground/background segmentation of the volleyball scene. The novelty of this work lies in completing three tasks: locating the human body, foreground/background segmentation, and an improved human body pose estimation algorithm that extracts the pose from the video. First, the human body is located in each frame of the video; then, the pose of a graphical model is estimated based on the color and texture of each part. After the pose in each frame of the video has been recognized, the recognition result is displayed. Experiments show that, after the position of the human body is detected, the predefined frame setting process is carried out in two steps, which improves the automation of the human body detection algorithm, effectively extracts the human motion from the video, and increases the motion capture rate by more than 30%, providing a useful reference for improving the movement skills and training of college volleyball players.
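A plain three-frame difference detector of the kind improved on here can be sketched as follows (Python/OpenCV). The background model that the paper combines with the frame difference is omitted, and the threshold and morphology kernel size are placeholder assumptions.

```python
# Three-frame difference motion detection on grayscale frames: intersecting
# the two difference maps suppresses the "hole" left by a single difference.
import cv2

def three_frame_diff(prev, curr, nxt, thresh=25):
    d1 = cv2.absdiff(curr, prev)
    d2 = cv2.absdiff(nxt, curr)
    _, b1 = cv2.threshold(d1, thresh, 255, cv2.THRESH_BINARY)
    _, b2 = cv2.threshold(d2, thresh, 255, cv2.THRESH_BINARY)
    motion = cv2.bitwise_and(b1, b2)
    # Close small gaps so the detected player region stays contiguous.
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
    return cv2.morphologyEx(motion, cv2.MORPH_CLOSE, kernel)
```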


Water ◽  
2021 ◽  
Vol 13 (9) ◽  
pp. 1304
Author(s):  
Triantafyllia-Maria Perivolioti ◽  
Michal Tušer ◽  
Dimitrios Terzopoulos ◽  
Stefanos P. Sgardelis ◽  
Ioannis Antoniou

DIDSON acoustic cameras provide a way to collect temporally dense, high-resolution imaging data, similar to videos. Detection of fish targets in those videos takes place in a manual or semi-automated manner, typically assisted by specialised software. Exploiting the visual nature of the recordings, tools and techniques from the field of computer vision can be applied to facilitate these relatively involved workflows. Furthermore, machine learning techniques can be used to minimise user intervention and optimise for specific detection and tracking scenarios. This study explored the feasibility of combining optical flow with a genetic algorithm, with the aim of automating motion detection and optimising target-to-background segmentation (masking) under custom criteria expressed in terms of the resulting masks. A 1000-frame video sequence with sparse, smoothly moving targets, reconstructed from a 125 s DIDSON recording, was analysed under two distinct scenarios, and an elementary detection method was used to assess and compare the resulting foreground (target) masks. The results indicate a high sensitivity to motion, as well as to the visual characteristics of targets, with the resulting foreground masks generally capturing fish targets in the majority of frames, with only small gaps of undetected targets lasting no more than a few frames. Despite the high computational overhead, implementation refinements could increase computational feasibility, while extending the algorithms to include target detection and tracking could further improve automation and potentially provide an efficient tool for the automated preliminary assessment of voluminous DIDSON data recordings.
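The motion-masking half of this idea can be illustrated with dense optical flow and a magnitude threshold (Python/OpenCV). The genetic-algorithm search over the masking parameters described above is not reproduced, and the Farneback parameters and threshold are assumptions.

```python
# Dense optical flow (Farneback) thresholded on motion magnitude to produce
# a foreground (target) mask: a sketch of the masking step only.
import cv2
import numpy as np

def flow_mask(prev_gray, curr_gray, mag_thresh=1.0):
    flow = cv2.calcOpticalFlowFarneback(prev_gray, curr_gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    mag = np.linalg.norm(flow, axis=2)          # per-pixel motion magnitude
    return (mag > mag_thresh).astype(np.uint8) * 255
```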


2021 ◽  
Author(s):  
Wendy Marcela Fong Amarís ◽  
Carol Viviana Martinez Luna ◽  
Liliana Jazmín Cortés Cortés ◽  
Daniel Ricardo Suárez Venegas

Background: The World Health Organization (WHO) provides protocols for the diagnosis of malaria. One of them concerns the staining process of blood samples, which must guarantee correct parasite visualization. Ensuring the quality of the staining procedure on thick blood smears (TBS) is a difficult task, especially in rural centers, where several factors can affect the smear quality (e.g. the types of reagents employed and the place of sample preparation, among others). This work presents an analysis of an image-based approach to evaluate the coloration quality of the staining process of TBS used for malaria diagnosis. Methods: According to the WHO, there are different coloration quality descriptors of smears. Among those, the background color is one of the best indicators of how well the staining process was conducted. An image database with 420 images (corresponding to 42 TBS samples) was created for analyzing and testing image-based algorithms that detect the coloration quality of TBS. Background segmentation techniques (based on the RGB and HSV color spaces) were explored to separate the background from the foreground (leukocytes, platelets, parasites). Then, different features (PCA, correlation, histograms, variance) were explored as image criteria of coloration quality on the extracted background information and evaluated according to their capability to classify TBS images as having Good or Bad coloration quality. Results: For background segmentation, a thresholding-based approach on the SV components of the HSV color space was selected. It provided robust separation of the background information independently of its coloration quality. As an image criterion of coloration quality, among the 19 feature vectors explored, the best was the 15-bin histogram of the Hue component, with classification rates above 97%. Conclusions: An analysis of an image-based approach to describe the coloration quality of TBS was presented. It was demonstrated that, if a robust background segmentation is conducted, the histogram of the H component of the HSV color space is the best feature vector to discriminate the coloration quality of the smears. These results are the baseline for automating the estimation of coloration quality, which is a key component of automated malaria diagnosis from TBS and has not been studied before.
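A minimal sketch of the selected approach follows (Python/OpenCV/NumPy): threshold the S and V channels to isolate the background, then summarise it with a 15-bin Hue histogram. The specific threshold values are assumptions, since the paper's tuned values are not reproduced here.

```python
# SV-threshold background segmentation plus a Hue-histogram descriptor of
# the extracted background: illustrative thresholds only.
import cv2
import numpy as np

def background_hue_histogram(image_bgr, s_max=40, v_min=150, bins=15):
    hsv = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2HSV)
    h, s, v = cv2.split(hsv)
    background = (s <= s_max) & (v >= v_min)       # low saturation, high value
    hist, _ = np.histogram(h[background], bins=bins, range=(0, 180))
    return hist / max(hist.sum(), 1)               # normalised 15-bin feature
```

The resulting 15-dimensional vector could then feed any standard classifier to label a smear image as Good or Bad coloration quality.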


2021 ◽  
Vol 2021 ◽  
pp. 1-13
Author(s):  
YuFan Cai ◽  
YanYan Zhang ◽  
ChengSheng Pan

The difficulty of lane detection lies in the imbalance between the number of target pixels and background pixels. The sparse target distribution misleads the neural network into paying more attention to background segmentation in order to obtain a better loss convergence result. This makes it difficult for some models to detect lane-line pixels and can lead to training failure (the network is unable to output useful lane information). Properly increasing the receptive field can enlarge the range of interaction between pixels and thus mitigate this problem. Moreover, interference and noise in real environments, such as vehicle occlusion, reflections from car glass, and tree shadows, increase the difficulty of lane classification. In this paper, we argue that features obtained from a reasonable combination of receptive fields can help avoid over-segmentation of the image, so that most of the interference information is filtered out. Based on this idea, Adaptive Receptive Field Net (ARFNet) is proposed to solve the problem of receptive field combination with the help of multi-receptive-field aggregation layers and a scoring mechanism. This paper explains the working principle of ARFNet and analyses several experiments carried out to adjust the network structure parameters in order to obtain better results on the CULane dataset.
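One way to picture a multi-receptive-field aggregation layer with a scoring mechanism is a set of parallel dilated convolutions blended by learned softmax scores, as in the PyTorch sketch below. The channel count, dilation rates, and scoring design are assumptions rather than the ARFNet definition.

```python
# Parallel dilated convolutions whose outputs are blended by learned scores:
# a hedged sketch of multi-receptive-field aggregation.
import torch
import torch.nn as nn

class ReceptiveFieldAggregation(nn.Module):
    def __init__(self, channels, dilations=(1, 2, 4, 8)):
        super().__init__()
        self.branches = nn.ModuleList(
            nn.Conv2d(channels, channels, 3, padding=d, dilation=d)
            for d in dilations
        )
        # One score per branch, turned into blending weights with softmax.
        self.scores = nn.Parameter(torch.zeros(len(dilations)))

    def forward(self, x):
        weights = torch.softmax(self.scores, dim=0)
        feats = [w * branch(x) for w, branch in zip(weights, self.branches)]
        return torch.stack(feats, dim=0).sum(dim=0)

out = ReceptiveFieldAggregation(64)(torch.randn(1, 64, 36, 100))
```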


Author(s):  
Wei Chen ◽  
Cenyu He ◽  
Chunlin Ji ◽  
Meiying Zhang ◽  
Siyu Chen

Conventional algorithms fail to obtain satisfactory background segmentation results for underwater images. In this study, an improved K-means algorithm was developed for underwater image background segmentation to address the issue of improper K value determination and to minimize the impact of the initial centroid positions during the gray-level quantization of the conventional K-means algorithm. A total of 100 underwater images taken by an underwater robot were used to test the algorithm in terms of background segmentation validity and time cost. The K value and the initial centroid positions of the grayscale image were optimized. The results were compared to three existing algorithms: the conventional K-means algorithm, the improved Otsu algorithm, and the Canny operator edge extraction method. The experimental results showed that the improved K-means algorithm could effectively segment the background of underwater images with a low color cast, low contrast, and blurred edges. Although its time cost was higher than that of the other three algorithms, it was nonetheless more efficient than time-consuming manual segmentation. The algorithm proposed in this paper could potentially be used for background segmentation in underwater environments.
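A rough sketch of K-Means gray-level quantisation with an automatic choice of K is shown below (Python/OpenCV). The elbow-style stopping rule, k-means++ seeding, and darkest-cluster-as-background heuristic are stand-ins for the paper's improvements, whose exact criteria are not reproduced here.

```python
# Gray-level K-Means with an elbow-style choice of K, returning a background
# mask: a sketch under assumed heuristics, not the published algorithm.
import cv2
import numpy as np

def kmeans_background_mask(gray, k_max=6, min_gain=0.1):
    data = gray.reshape(-1, 1).astype(np.float32)
    criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 0.5)
    prev = None
    for k in range(2, k_max + 1):
        compactness, labels, centers = cv2.kmeans(
            data, k, None, criteria, 3, cv2.KMEANS_PP_CENTERS)
        # Stop when adding a cluster no longer reduces compactness enough.
        if prev is not None and compactness > (1 - min_gain) * prev[0]:
            _, labels, centers = prev
            break
        prev = (compactness, labels, centers)
    bg = int(np.argmin(centers.ravel()))   # darkest cluster = background (assumed)
    return (labels.reshape(gray.shape) == bg).astype(np.uint8) * 255
```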


Author(s):  
Maryam Hamad ◽  
Caroline Conti ◽  
Ana Maria de Almeida ◽  
Paulo Nunes ◽  
Luis Ducla Soares
