Semantic segmentation approach for tunnel roads’ analysis

Author(s):  
Arcadi Llanza ◽  
Assan Sanogo ◽  
Marouan Khata ◽  
Alami Khalil ◽  
Nadiya Shvai ◽  
...  
2019 ◽  
Vol 10 (3) ◽  
pp. 2426-2432 ◽  
Author(s):  
Arjun ◽  
Kanchana V

The spinal cord plays an important role in human life. Using digital image processing techniques, the interior of the human body can be analyzed from MRI, CT, X-ray and other modalities; medical image processing is extensively used in the medical field. In this work we use MRI images to detect degenerative disease of the spinal cord. First, we preprocess the MRI image and locate the degenerative part of the spinal cord using various segmentation approaches; we then classify the image as showing degenerative disease or a normal spinal cord using various classification algorithms. For segmentation, we use an efficient semantic segmentation approach.
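The preprocess → segment → classify pipeline described above can be sketched schematically. The normalization, intensity threshold, and area rule below are illustrative placeholders, not the authors' actual methods, which the abstract does not specify:

```python
import numpy as np

def preprocess(img):
    """Normalize an MRI slice to [0, 1] (stand-in for the preprocessing stage)."""
    img = np.asarray(img, dtype=float)
    rng = img.max() - img.min()
    return (img - img.min()) / rng if rng else np.zeros_like(img)

def segment(img, thresh=0.5):
    """Toy intensity-threshold segmentation standing in for the
    (unspecified) semantic segmentation model."""
    return preprocess(img) > thresh

def classify(mask, min_area=0.1):
    """Label the scan 'degenerative' if the segmented region covers more
    than `min_area` of the slice; 'normal' otherwise (hypothetical rule)."""
    return "degenerative" if mask.mean() > min_area else "normal"
```

Any real classifier would of course use learned features rather than a coverage threshold; the sketch only shows how the three stages chain together.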


2018 ◽  
Vol 7 (2.5) ◽  
pp. 1
Author(s):  
Khalil Khan ◽  
Nasir Ahmad ◽  
Irfan Uddin ◽  
Muhammad Ehsan Mazhar ◽  
Rehan Ullah Khan

Background and objective: A novel face parsing method is proposed in this paper which partitions a facial image into six semantic classes. Unlike previous approaches, which segmented a facial image into three or four classes, we extend the class labels to six. Materials and Methods: A dataset of 464 images taken from the FEI, MIT-CBCL, Pointing’04 and SiblingsDB databases was annotated. A discriminative model was trained by extracting features from square patches. The built model was tested on two different semantic segmentation approaches: pixel-based and super-pixel-based semantic segmentation (PB_SS and SPB_SS). Results: Pixel labeling accuracies (PLA) of 94.68% and 90.35% were obtained with the PB_SS and SPB_SS methods, respectively, on frontal images. Conclusions: A new method for face parts parsing was proposed which efficiently segments a facial image into its constituent parts.
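Pixel labeling accuracy, the metric reported above, is simply the fraction of pixels whose predicted class label matches the ground truth. A minimal sketch (the toy label maps are made up for illustration):

```python
import numpy as np

def pixel_labeling_accuracy(pred, gt):
    """Fraction of pixels whose predicted class label equals the ground truth."""
    pred, gt = np.asarray(pred), np.asarray(gt)
    return float((pred == gt).mean())

# Toy 2x3 label maps over six face classes (0..5); one pixel is wrong.
gt = np.array([[0, 1, 2], [3, 4, 5]])
pred = np.array([[0, 1, 2], [3, 4, 0]])
print(pixel_labeling_accuracy(pred, gt))  # 5/6 ≈ 0.833
```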


Author(s):  
A. Adam ◽  
L. Grammatikopoulos ◽  
G. Karras ◽  
E. Protopapadakis ◽  
K. Karantzalos

Abstract. 3D semantic segmentation is the joint task of partitioning a point cloud into semantically consistent 3D regions and assigning each of them to a semantic class/label. While traditional approaches to 3D semantic segmentation typically rely only on structural information about the objects (i.e. object geometry and shape), in recent years many techniques combining visual and geometric features have emerged, taking advantage of progress in SfM/MVS algorithms that reconstruct point clouds from multiple overlapping images. Our work describes a hybrid methodology for 3D semantic segmentation, relying on both 2D and 3D space and aiming to explore whether image selection is critical to the accuracy of 3D semantic segmentation of point clouds. Experimental results are demonstrated on a freely available online dataset depicting city blocks around Paris. The experimental procedure not only validates that hybrid (geometric and visual) features can achieve a more accurate semantic segmentation, but also demonstrates the importance of selecting the most appropriate view for 2D feature extraction.
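The core of the hybrid-feature idea is to give each 3D point both a geometric and a visual descriptor before classification. A minimal sketch, assuming per-point coordinates, normals, and RGB values sampled from a chosen view (the specific feature set here is an assumption, not the authors' exact design):

```python
import numpy as np

def hybrid_features(xyz, normals, rgb):
    """Concatenate per-point geometric features (coordinates, normals)
    with visual features (RGB sampled from the selected image view).
    Each input has shape (n_points, 3); output is (n_points, 9)."""
    return np.concatenate([xyz, normals, rgb / 255.0], axis=1)
```

A downstream classifier (e.g. a random forest or MLP) would then be trained on these concatenated vectors; the abstract's finding is that which image supplies the RGB part matters.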


Author(s):  
M. Chizhova ◽  
A. Gurianov ◽  
M. Hess ◽  
T. Luhmann ◽  
A. Brunn ◽  
...  

For the interpretation of point clouds, the semantic definition of segments extracted from point clouds or images is a common problem. Usually, the semantics of geometrically pre-segmented point cloud elements are determined using probabilistic networks and scene databases. The proposed semantic segmentation method is based on the psychological human interpretation of geometric objects, in particular on fundamental rules of primary comprehension. Starting from these rules, buildings can be classified quite well and simply by a human operator (e.g. an architect) into different building types and structural elements (dome, nave, transept, etc.), including particular building parts which are visually detected. The key part of the procedure is a novel method based on hashing, in which point cloud projections are transformed into binary pixel representations. The segmentation approach, demonstrated on the example of classical Orthodox churches, is also suitable for other buildings and objects characterized by a particular typology in their construction (e.g. industrial objects in standardized environments with strict component design allowing clear semantic modelling).
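The projection-hashing step can be sketched as follows: project the points onto a plane, rasterize into a binary occupancy image, and hash that image so structurally identical parts map to the same key. The grid size and SHA-256 choice are assumptions for illustration, not the paper's actual parameters:

```python
import hashlib
import numpy as np

def binary_projection_hash(points, grid=(32, 32)):
    """Project 3D points onto the XY plane, rasterize into a binary
    occupancy image, and hash it. Point order does not affect the hash."""
    pts = np.asarray(points, dtype=float)
    xy = pts[:, :2]
    lo, hi = xy.min(axis=0), xy.max(axis=0)
    span = np.where(hi > lo, hi - lo, 1.0)  # avoid division by zero
    idx = ((xy - lo) / span * (np.array(grid) - 1)).astype(int)
    img = np.zeros(grid, dtype=np.uint8)
    img[idx[:, 0], idx[:, 1]] = 1  # mark occupied cells
    return hashlib.sha256(img.tobytes()).hexdigest()
```

Because the hash depends only on the occupancy image, two scans of the same structural element (however the points are ordered or densely sampled within cells) yield the same key, which is what makes hashing usable for lookup.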


Author(s):  
Zhicheng Guo ◽  
Cheng Ding ◽  
Xiao Hu ◽  
Cynthia Rudin

Abstract Objective. Wearable devices equipped with photoplethysmography (PPG) sensors provide a low-cost, long-term solution for early diagnosis and continuous screening of heart conditions. However, PPG signals collected from such devices often suffer from corruption caused by artifacts. The objective of this study is to develop an effective supervised algorithm to locate the regions of artifacts within PPG signals. Approach. We treat artifact detection as a 1D segmentation problem and solve it via a novel combination of an active-contour-based loss and an adapted U-Net architecture. The proposed algorithm was trained on the PPG DaLiA training set, and further evaluated on the PPG DaLiA testing set, the WESAD dataset and the TROIKA dataset. Main results. We evaluated with the DICE score, a well-established metric for segmentation accuracy in the field of computer vision. The proposed method outperforms baseline methods on all three datasets by a large margin (≈7 percentage points above the next best method). On the PPG DaLiA testing set, the WESAD dataset and the TROIKA dataset, the proposed method achieved 0.8734±0.0018, 0.9114±0.0033 and 0.8050±0.0116, respectively; the next best method achieved only 0.8068±0.0014, 0.8446±0.0013 and 0.7247±0.0050. Significance. The proposed method is able to pinpoint the exact locations of artifacts with high precision; previously, only a binary classification of whether a PPG signal had good or poor quality was available. This more nuanced information will be critical in informing the design of algorithms to detect cardiac arrhythmia.
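The DICE score used above compares two binary masks: twice the overlap divided by the total positive area. For the 1D artifact-segmentation setting it reduces to a comparison of binary sample-wise masks (the toy signals below are made up):

```python
import numpy as np

def dice_score(pred, gt):
    """DICE coefficient between two binary 1D masks (1 = artifact sample)."""
    pred, gt = np.asarray(pred, dtype=bool), np.asarray(gt, dtype=bool)
    inter = np.logical_and(pred, gt).sum()
    denom = pred.sum() + gt.sum()
    return 2.0 * inter / denom if denom else 1.0  # both empty: perfect match

gt = np.array([0, 0, 1, 1, 1, 0, 0, 0])    # true artifact region
pred = np.array([0, 0, 0, 1, 1, 1, 0, 0])  # prediction shifted by one sample
print(dice_score(pred, gt))  # 2*2 / (3+3) ≈ 0.667
```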


2020 ◽  
Vol 10 (17) ◽  
pp. 5894
Author(s):  
Hamidullah Binol ◽  
Aaron C. Moberly ◽  
Muhammad Khalid Khan Niazi ◽  
Garth Essig ◽  
Jay Shah ◽  
...  

Background and Objective: The aim of this study is to develop and validate an automated image segmentation-based frame selection and stitching framework to create enhanced composite images from otoscope videos. The proposed framework, called SelectStitch, is useful for classifying eardrum abnormalities using a single composite image instead of the entire raw otoscope video. Methods: SelectStitch consists of a convolutional neural network (CNN) based semantic segmentation approach to detect the eardrum in each frame of the otoscope video, and a stitching engine to generate a high-quality composite image from the detected eardrum regions. In this study, we utilize two separate datasets: the first has 36 otoscope videos, used to train a semantic segmentation model, and the second, containing 100 videos, was used to test the proposed method. Cases from both adult and pediatric patients were used. A 4-level-deep U-Net architecture was trained to automatically find eardrum regions in each otoscope video frame from the first dataset. After segmentation, we automatically selected meaningful frames from the otoscope videos using a pre-defined threshold: a frame must contain an eardrum region covering at least 20% of the frame size. We generated 100 composite images from the test dataset. Three ear, nose, and throat (ENT) specialists (ENT-I, ENT-II, ENT-III) compared, in two rounds, the composite images produced by SelectStitch against composite images generated by the base process (stitching all frames from the same video) in terms of their diagnostic capabilities. Results: In the first round of the study, ENT-I, ENT-II, and ENT-III graded 58, 57, and 71 composite images out of 100, respectively, as improved by SelectStitch over the base composite, reflecting greater diagnostic capability. In the repeat assessment, these numbers were 56, 56, and 64, respectively. Only 6%, 3%, and 3% of the cases received a lower score than the base composite images for ENT-I, ENT-II, and ENT-III in Round 1, and 4%, 0%, and 2% of the cases in Round 2. Conclusions: We conclude that frame selection and stitching increase the probability of detecting a lesion even if it appears in only a few frames.
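The frame-selection rule above (keep a frame only if the segmented eardrum covers at least 20% of it) can be sketched directly on the binary masks produced by the segmentation model. The toy masks below are invented for illustration:

```python
import numpy as np

def select_frames(masks, min_coverage=0.2):
    """Return indices of frames whose segmentation mask covers at least
    `min_coverage` of the frame area (the paper's 20% rule)."""
    keep = []
    for i, m in enumerate(masks):
        m = np.asarray(m, dtype=bool)
        if m.mean() >= min_coverage:  # fraction of pixels inside the eardrum
            keep.append(i)
    return keep

# Toy 4x4 masks: full coverage, no eardrum, and exactly 25% coverage.
masks = [np.ones((4, 4)), np.zeros((4, 4)), np.pad(np.ones((2, 2)), 1)]
print(select_frames(masks))  # frames 0 and 2 pass the 20% threshold
```

Only the selected frames are then handed to the stitching engine, which is what keeps uninformative frames out of the composite.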


2020 ◽  
Vol 2020 ◽  
pp. 1-13
Author(s):  
Muhammad Shahzad ◽  
Arif Iqbal Umar ◽  
Muazzam A. Khan ◽  
Syed Hamad Shirazi ◽  
Zakir Khan ◽  
...  

Previous works on the segmentation of SEM (scanning electron microscope) blood cell images have ignored the semantic segmentation approach to whole-slide blood cell segmentation. In the proposed work, we address the problem of whole-slide blood cell segmentation using a semantic segmentation approach. We design a novel convolutional encoder-decoder framework with VGG-16 as the pixel-level feature extraction model. The proposed framework comprises three main steps. First, all the original images, along with manually generated ground truth masks for each blood cell type, are passed through a preprocessing stage in which pixel-level labeling, RGB-to-grayscale conversion of the masked image, pixel fusing, and unity mask generation are performed. Second, VGG-16 is loaded into the system as a pretrained pixel-level feature extraction model. Third, the training process is initiated on the proposed model. We evaluated our network performance on three evaluation metrics and obtained outstanding results with respect to classwise as well as global and mean accuracies. Our system achieved classwise accuracies of 97.45%, 93.34%, and 85.11% for RBCs, WBCs, and platelets, respectively, while global and mean accuracies were 97.18% and 91.96%, respectively.
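The three accuracy figures reported above are standard derivations from a pixel-level confusion matrix: per-class recall (classwise), overall fraction of correctly labeled pixels (global), and the unweighted mean of the classwise values. A minimal sketch with a made-up 2-class confusion matrix:

```python
import numpy as np

def accuracies(conf):
    """conf[i, j] = number of pixels of true class i predicted as class j.
    Returns (classwise, global, mean) accuracies."""
    conf = np.asarray(conf, dtype=float)
    classwise = np.diag(conf) / conf.sum(axis=1)  # per-class recall
    global_acc = np.trace(conf) / conf.sum()      # all correct / all pixels
    mean_acc = classwise.mean()                   # unweighted class average
    return classwise, global_acc, mean_acc

cw, g, m = accuracies([[9, 1],   # 9 of 10 class-0 pixels correct
                       [2, 8]])  # 8 of 10 class-1 pixels correct
print(cw, g, m)  # [0.9 0.8], 0.85, 0.85
```

Global accuracy weights classes by pixel count, so it can hide poor performance on rare classes such as platelets; the mean accuracy treats every class equally, which is why both are reported.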

