Automatic Segmentation of Glottal Space from Video Images Based on Mathematical Morphology and the Hough Transform

Author(s):  
Davod Aghlmandi ◽  
Karim Faez
2011 ◽  
Vol 383-390 ◽  
pp. 7607-7612 ◽  
Author(s):  
Jian Jun Chen ◽  
Yi Jun Gao ◽  
Zhao Ju Deng

In order to improve the accuracy and efficiency of the automatic counting of microscopic cells, a method based on the Hough transform is proposed, in which the standard Hough transform is improved using image gradient information. Compared with traditional counting methods based on mathematical morphology and boundary-tracking tags, the counting accuracy is greatly improved. The results show that both the accuracy and the efficiency of microscopic cell counting based on the gradient Hough transform are improved.
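As a rough illustration of the gradient-based voting the abstract describes (a sketch, not the authors' implementation; `gradient_hough_circles` and its parameters are hypothetical names), each strong-gradient pixel votes for a circle centre only along its gradient direction, instead of over a full circle of candidates:

```python
import numpy as np

def gradient_hough_circles(img, radius, edge_thresh=0.5):
    """Vote for circle centres along the image gradient direction.

    Each strong-gradient pixel casts votes only at +/- radius along
    its own gradient, rather than over a full circle of candidates,
    which is the key gain of the gradient Hough transform.
    """
    gy, gx = np.gradient(img.astype(float))
    mag = np.hypot(gx, gy)
    acc = np.zeros_like(img, dtype=float)
    ys, xs = np.nonzero(mag > edge_thresh * mag.max())
    for y, x in zip(ys, xs):
        dx = gx[y, x] / mag[y, x]
        dy = gy[y, x] / mag[y, x]
        for s in (1, -1):  # centre may lie either way along the gradient
            cy = int(round(y + s * dy * radius))
            cx = int(round(x + s * dx * radius))
            if 0 <= cy < acc.shape[0] and 0 <= cx < acc.shape[1]:
                acc[cy, cx] += 1
    return acc
```

Thresholding the accumulator's peaks would then give a cell count; a real pipeline would also scan over a range of radii.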


Author(s):  
Laura Gui ◽  
Radoslaw Lisowski ◽  
Tamara Faundez ◽  
Petra S. Huppi ◽  
Francois Lazeyras ◽  
...  

2013 ◽  
Vol 196 ◽  
pp. 206-211 ◽  
Author(s):  
Bogdan Żak ◽  
Stanisław Hożyń

This paper develops a segmentation algorithm for the problem of recognizing objects in video images. It presents the steps of the algorithm together with a discussion of the techniques used: mathematical morphology, filtering, and gradient methods. Examples of the results of verification experiments are also presented.
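As a minimal sketch of the morphological step mentioned above (an illustration, not the authors' algorithm; the function names are hypothetical), a binary opening built from a 3x3 dilation and erosion removes speckle noise from a boolean segmentation mask:

```python
import numpy as np

def dilate(mask):
    """Binary dilation of a boolean mask with a 3x3 structuring element."""
    h, w = mask.shape
    p = np.pad(mask, 1)  # pad with False so the border does not wrap
    out = np.zeros_like(mask)
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            out |= p[1 + dy : 1 + dy + h, 1 + dx : 1 + dx + w]
    return out

def erode(mask):
    """Erosion by duality: complement, dilate, complement again."""
    return ~dilate(~mask)

def morphological_open(mask):
    """Opening = erosion then dilation; removes specks smaller than the element."""
    return dilate(erode(mask))
```

The dual closing (dilation then erosion) fills small holes instead; real pipelines usually combine both before contour extraction.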


Author(s):  
L. Tang ◽  
T. Deng ◽  
C. Ren

In the field of autonomous driving, the semantic information of lanes is very important. This paper proposes a method for the automatic detection of lanes and the extraction of semantic information from onboard camera videos. The proposed method first detects lane edges using the grayscale gradient direction and fits them with an improved probabilistic Hough transform; it then uses the vanishing-point principle to calculate the lanes' geometric positions and uses lane characteristics to extract lane semantic information through decision-tree classification. In the experiment, 216 road video images captured by a camera mounted on a moving vehicle were used to detect lanes and extract lane semantic information. The results show that the proposed method can accurately identify lane semantics from video images.
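The vanishing-point step can be illustrated with a small sketch (hypothetical names, not the paper's code): two fitted lane segments, as returned by a probabilistic Hough transform, are converted to homogeneous line coordinates and intersected:

```python
import numpy as np

def vanishing_point(seg_a, seg_b):
    """Intersect two lane segments, each given as ((x1, y1), (x2, y2)).

    The cross product of two homogeneous points gives the line through
    them; the cross product of two such lines gives their intersection.
    Returns (x, y), or None if the segments are parallel.
    """
    def to_line(seg):
        (x1, y1), (x2, y2) = seg
        return np.cross([x1, y1, 1.0], [x2, y2, 1.0])

    p = np.cross(to_line(seg_a), to_line(seg_b))
    if abs(p[2]) < 1e-9:
        return None  # parallel lines meet at infinity
    return p[0] / p[2], p[1] / p[2]
```

Given the vanishing point and the camera height, the lanes' geometric positions on the road plane can then be recovered by inverse perspective mapping.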


2021 ◽  
Vol 10 (16) ◽  
pp. 3589
Author(s):  
Yuhei Iwasa ◽  
Takuji Iwashita ◽  
Yuji Takeuchi ◽  
Hironao Ichikawa ◽  
Naoki Mita ◽  
...  

Background: Contrast-enhanced endoscopic ultrasound (CE-EUS) is useful for the differentiation of pancreatic tumors. Using deep learning for the segmentation and classification of pancreatic tumors might further improve the diagnostic capability of CE-EUS. Aims: The aim of this study was to evaluate the capability of deep learning for the automatic segmentation of pancreatic tumors on CE-EUS video images and possible factors affecting the automatic segmentation. Methods: This retrospective study included 100 patients who underwent CE-EUS for pancreatic tumors. The CE-EUS video images were converted from the originals to 90-second segments with six frames per second. Manual segmentation of pancreatic tumors from B-mode images was performed as ground truth. Automatic segmentation was performed using U-Net with 100 epochs and was evaluated with 4-fold cross-validation. The degree of respiratory movement (RM) and tumor boundary (TB) were divided into 3-degree intervals in each patient and evaluated as possible factors affecting the segmentation. The concordance rate was calculated using the intersection over union (IoU). Results: The median IoU of all cases was 0.77. The median IoUs in TB-1 (clear around), TB-2, and TB-3 (unclear more than half) were 0.80, 0.76, and 0.69, respectively. The IoU for TB-1 was significantly higher than that of TB-3 (p < 0.01). However, there was no significant difference between the degrees of RM. Conclusion: Automatic segmentation of pancreatic tumors using U-Net on CE-EUS video images showed a decent concordance rate. The concordance rate was lowered by an unclear TB but was not affected by RM.
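The concordance measure used in the study above is straightforward to compute; a minimal sketch for binary masks (the function name is hypothetical):

```python
import numpy as np

def iou(pred, truth):
    """Intersection over union of two binary segmentation masks."""
    pred = np.asarray(pred).astype(bool)
    truth = np.asarray(truth).astype(bool)
    union = np.logical_or(pred, truth).sum()
    if union == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return np.logical_and(pred, truth).sum() / union
```

Per-case IoU values like the study's median of 0.77 would come from applying this frame by frame and aggregating.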


2019 ◽  
Vol 66 (1) ◽  
pp. 25-34
Author(s):  
Bogdan Żak ◽  
Jerzy Garus

Abstract: The search for and detection of objects under water is carried out by groups of specialised divers. However, their time underwater and their ability to penetrate the depths are limited. For these reasons, the use of unmanned underwater vehicles equipped with technical observation equipment, including TV cameras, is becoming increasingly popular for these tasks. Video images from the cameras installed on the vehicles are used to identify and classify underwater objects. The process of recognising and identifying objects is tedious and difficult, requiring the analysis of numerous image sequences, so it is desirable to automate it. In response to these needs, this article presents a concept for identifying underwater objects based on visual images of an underwater area sent from an unmanned underwater vehicle to a base vessel. The methods of initial processing of the observed images, as well as the method of searching for selected objects in these images and identifying them with the Hough transform, are described. Furthermore, the paper presents the results of the preliminary processing and identification of the observed images following a deconvolution operation.
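The deconvolution preprocessing mentioned at the end can be sketched, under the assumption of a known blur kernel, as a frequency-domain Wiener filter (an illustration, not the paper's method; the names and the regularisation constant are hypothetical):

```python
import numpy as np

def wiener_deconvolve(blurred, psf, k=0.01):
    """Frequency-domain Wiener deconvolution with a known PSF.

    Divides the blurred spectrum by the blur kernel's spectrum,
    regularised by k so that frequencies where the kernel response is
    near zero (and noise dominates) are not amplified.
    """
    H = np.fft.fft2(psf, s=blurred.shape)
    G = np.fft.fft2(blurred)
    F = G * np.conj(H) / (np.abs(H) ** 2 + k)
    return np.real(np.fft.ifft2(F))
```

For real underwater footage, k would be chosen from an estimate of the noise-to-signal ratio rather than fixed.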


Author(s):  
Kwang Baek Kim ◽  
Doo Heon Song ◽  
Young Woon Woo

Large bowel obstruction is less frequent but often presents acutely and needs emergent treatment. An erect abdominal radiograph is usually the first imaging study performed in patients suspected of having large bowel obstruction. However, that modality suffers from operator subjectivity, so a fully automatic computer-aided tool is necessary. In this paper, we propose an automatic large bowel feature (air-fluid region) segmentation method based on Canny edge detection and the Hough transform. In experiments, the proposed method successfully found the target region in radiographic images from large bowel obstruction patients in all 30 cases provided. While applicable only to large bowel obstruction cases, the proposed method is practically feasible.
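As an illustrative sketch of the Hough voting step (not the authors' implementation; names are hypothetical), a standard (rho, theta) line accumulator over a binary edge map, such as Canny output, looks like this; air-fluid levels appear as near-horizontal straight edges, so the strongest peak gives their orientation and offset:

```python
import numpy as np

def hough_lines(edges, n_theta=180):
    """Accumulate (rho, theta) votes for every edge pixel.

    A line is rho = x*cos(theta) + y*sin(theta); each edge pixel votes
    once per sampled theta for the rho it implies.  rho is offset by the
    image diagonal so negative values index into the accumulator.
    """
    h, w = edges.shape
    diag = int(np.ceil(np.hypot(h, w)))
    thetas = np.linspace(0, np.pi, n_theta, endpoint=False)
    acc = np.zeros((2 * diag, n_theta), dtype=int)
    ys, xs = np.nonzero(edges)
    for theta_idx, theta in enumerate(thetas):
        rhos = (xs * np.cos(theta) + ys * np.sin(theta)).round().astype(int) + diag
        np.add.at(acc[:, theta_idx], rhos, 1)  # unbuffered: repeated rhos all count
    return acc, thetas, diag
```

A near-horizontal peak (theta close to 90 degrees) at a plausible rho would then mark a candidate air-fluid level in the radiograph.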

