segmentation task
Recently Published Documents


TOTAL DOCUMENTS: 221 (FIVE YEARS: 160)
H-INDEX: 15 (FIVE YEARS: 6)

2022 ◽  
pp. 1-49
Author(s):  
Tiberiu Teşileanu ◽  
Siavash Golkar ◽  
Samaneh Nasiri ◽  
Anirvan M. Sengupta ◽  
Dmitri B. Chklovskii

Abstract The brain must extract behaviorally relevant latent variables from the signals streamed by the sensory organs. Such latent variables are often encoded in the dynamics that generated the signal rather than in the specific realization of the waveform. Therefore, one problem faced by the brain is to segment time series based on the underlying dynamics. We present two algorithms for performing this segmentation task that are biologically plausible, which we define as acting in a streaming setting with all learning rules being local. One algorithm is model based and can be derived from an optimization problem involving a mixture of autoregressive processes. This algorithm relies on feedback in the form of a prediction error and can also be used for forecasting future samples. In some brain regions, such as the retina, the feedback connections necessary to use the prediction error for learning are absent. For this case, we propose a second, model-free algorithm that uses a running estimate of the autocorrelation structure of the signal to perform the segmentation. We show that both algorithms do well when tasked with segmenting signals drawn from autoregressive models with piecewise-constant parameters. In particular, the segmentation accuracy is similar to that obtained from oracle-like methods in which the ground-truth parameters of the autoregressive models are known. We also test our methods on datasets generated by alternating snippets of voice recordings. We provide implementations of our algorithms at https://github.com/ttesileanu/bio-time-series.
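A minimal sketch of the model-based idea (assumptions: the ground-truth AR parameters are known, i.e., the oracle-like baseline the abstract mentions, rather than the locally learned ones; see the authors' repository above for the actual algorithms):

```python
# Segment a signal that switches between two known AR(2) processes by
# assigning each sample to the model with the smaller smoothed prediction
# error. Illustrative only; not the authors' biologically plausible rules.
import numpy as np

rng = np.random.default_rng(0)

# Two AR(2) parameter sets; the generator switches every 200 samples.
coefs = [np.array([1.5, -0.8]), np.array([-0.4, 0.3])]
x, truth = [0.0, 0.0], []
for seg in range(6):
    k = seg % 2
    for _ in range(200):
        x.append(coefs[k] @ np.array([x[-1], x[-2]]) + 0.1 * rng.standard_normal())
        truth.append(k)
x = np.array(x)

# Streaming assignment via exponentially smoothed squared prediction errors.
err, labels = np.zeros(2), []
for t in range(2, len(x)):
    past = np.array([x[t - 1], x[t - 2]])
    e = np.array([x[t] - c @ past for c in coefs])
    err = 0.9 * err + 0.1 * e**2
    labels.append(int(np.argmin(err)))

acc = np.mean(np.array(labels) == np.array(truth))
print(f"agreement with ground-truth segmentation: {acc:.2f}")
```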


Sensors ◽  
2021 ◽  
Vol 22 (1) ◽  
pp. 283
Author(s):  
Xiaoyuan Yu ◽  
Suigu Tang ◽  
Chak Fong Cheang ◽  
Hon Ho Yu ◽  
I Cheong Choi

The automatic analysis of endoscopic images to assist endoscopists in accurately identifying the types and locations of esophageal lesions remains a challenge. In this paper, we propose a novel multi-task deep learning model for automatic diagnosis, which does not simply replace endoscopists in decision making: endoscopists are expected to correct false predictions from the diagnosis system when additional supporting information is provided. To help endoscopists improve diagnostic accuracy in identifying lesion types, an image retrieval module is added to the classification task to provide an additional confidence level for the predicted types of esophageal lesions. In addition, a mutual attention module is added to the segmentation task to improve its performance in locating esophageal lesions. The proposed model is evaluated and compared with other deep learning models on a dataset of 1003 endoscopic images, comprising 290 esophageal cancer, 473 esophagitis, and 240 normal images. The experimental results show the promising performance of our model, with a high accuracy of 96.76% for the classification and a Dice coefficient of 82.47% for the segmentation. Consequently, the proposed multi-task deep learning model can be an effective tool to help endoscopists in judging esophageal lesions.
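As a hedged illustration of the multi-task layout the abstract describes, here is a shared encoder feeding a classification head and a segmentation head; the image retrieval and mutual attention modules are omitted, and all layer sizes are assumptions rather than the paper's configuration:

```python
# Minimal multi-task network: one encoder, two task heads (PyTorch sketch).
import torch
import torch.nn as nn

class MultiTaskNet(nn.Module):
    def __init__(self, n_classes=3):  # cancer / esophagitis / normal
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
        )
        # Classification head: predicts the lesion type.
        self.classifier = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(64, n_classes)
        )
        # Segmentation head: predicts a per-pixel lesion mask.
        self.segmenter = nn.Sequential(
            nn.Conv2d(64, 32, 3, padding=1), nn.ReLU(),
            nn.Upsample(scale_factor=2, mode="bilinear", align_corners=False),
            nn.Conv2d(32, 1, 1),
        )

    def forward(self, x):
        feats = self.encoder(x)  # shared features drive both tasks
        return self.classifier(feats), self.segmenter(feats)

logits, mask = MultiTaskNet()(torch.randn(2, 3, 256, 256))
print(logits.shape, mask.shape)  # torch.Size([2, 3]) torch.Size([2, 1, 256, 256])
```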


2021 ◽  
Vol 7 ◽  
pp. e783
Author(s):  
Bin Lin ◽  
Houcheng Su ◽  
Danyang Li ◽  
Ao Feng ◽  
Hongxiang Li ◽  
...  

Due to memory and computing resource limitations, deploying convolutional neural networks on embedded and mobile devices is challenging. However, the redundant use of 1 × 1 convolutions in traditional lightweight networks, such as MobileNetV1, increases computing time. By exploiting the 1 × 1 convolution, which plays a vital role in extracting local features, more effectively, we introduce a new lightweight network named PlaneNet. PlaneNet improves accuracy while reducing the number of parameters and multiply-accumulate operations (Madds). Our model is evaluated on classification and semantic segmentation tasks. For the classification tasks, the CIFAR-10, Caltech-101, and ImageNet2012 datasets are used. For the semantic segmentation task, PlaneNet is tested on the VOC2012 dataset. The experimental results demonstrate that PlaneNet (74.48%) obtains higher accuracy than MobileNetV3-Large (73.99%) and GhostNet (72.87%) and achieves state-of-the-art performance with fewer network parameters in both tasks. In addition, compared with existing models, it reaches the practical application level on mobile devices. The code of PlaneNet is available on GitHub: https://github.com/LinB203/planenet.
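To see why 1 × 1 (pointwise) convolutions dominate the cost of depthwise-separable blocks such as MobileNetV1's, here is a back-of-envelope Madds count; the feature-map size and channel counts are illustrative assumptions, not PlaneNet's configuration:

```python
# Multiply-accumulate (Madds) shares in a depthwise-separable block.
def conv_madds(h, w, c_in, c_out, k):
    """Madds for a standard k x k convolution on an h x w feature map."""
    return h * w * c_in * c_out * k * k

h = w = 28
c = 256
depthwise = h * w * c * 3 * 3            # 3x3 depthwise: one filter per channel
pointwise = conv_madds(h, w, c, c, 1)    # 1x1 pointwise: mixes channels

total = depthwise + pointwise
print(f"depthwise share: {depthwise / total:.1%}")  # ~3.4%
print(f"pointwise share: {pointwise / total:.1%}")  # ~96.6%
```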


2021 ◽  
Vol 12 (1) ◽  
Author(s):  
Dalei Jiang ◽  
Yin Wang ◽  
Feng Zhou ◽  
Hongtao Ma ◽  
Wenting Zhang ◽  
...  

Abstract
Background: Image segmentation is a difficult and classic problem. It has a wide range of applications, one of which is skin lesion segmentation. Numerous researchers have made great efforts to tackle the problem, yet there is still no universal method across application domains.
Results: We propose a novel approach that combines a deep convolutional neural network with GrabCut-like user interaction to tackle the interactive skin lesion segmentation problem. Slightly deviating from GrabCut's user interaction, our method uses boxes and clicks. In addition, contrary to existing interactive segmentation algorithms that combine the initial segmentation task with the following refinement task, we explicitly separate these tasks by designing individual sub-networks: SBox-Net and Click-Net. SBox-Net is a full-fledged segmentation network built upon a pre-trained, state-of-the-art segmentation model, while Click-Net is a simple yet powerful network that combines feature maps extracted from SBox-Net with user clicks to residually refine the mistakes made by SBox-Net. Extensive experiments on two public datasets, PH2 and ISIC, confirm the effectiveness of our approach.
Conclusions: We present an interactive two-stage pipeline for skin lesion segmentation, which comprehensive experiments demonstrated to be effective.
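A minimal sketch of the Click-Net idea, assuming a residual-refinement layout; channel counts and the click encoding are illustrative assumptions, not the paper's implementation:

```python
# Refine an initial segmentation by predicting a residual correction from
# backbone features and user-click maps (PyTorch sketch).
import torch
import torch.nn as nn

class ClickRefiner(nn.Module):
    def __init__(self, feat_ch=64):
        super().__init__()
        # Inputs: backbone features + 2 click channels (positive / negative).
        self.refine = nn.Sequential(
            nn.Conv2d(feat_ch + 2, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 1, 1),
        )

    def forward(self, feats, clicks, init_logits):
        residual = self.refine(torch.cat([feats, clicks], dim=1))
        return init_logits + residual  # residually correct the initial mask

feats = torch.randn(1, 64, 128, 128)   # features from the first-stage network
clicks = torch.zeros(1, 2, 128, 128)   # encoded user clicks
init = torch.randn(1, 1, 128, 128)     # first-stage segmentation logits
print(ClickRefiner()(feats, clicks, init).shape)  # torch.Size([1, 1, 128, 128])
```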


2021 ◽  
Vol 12 ◽  
Author(s):  
Waltraud Stadler ◽  
Veit S. Kraft ◽  
Roee Be’er ◽  
Joachim Hermsdörfer ◽  
Masami Ishihara

How do athletes represent actions from their sport? How are these representations structured, and which knowledge is shared among experts in the same discipline? To address these questions, the event segmentation task was used. Experts in Taekwondo and novices indicated how they would subjectively split videos of Taekwondo form sequences into meaningful units. In previous research, this procedure was shown to unveil the structure of internal action representations and to be affected by sensorimotor knowledge. Without specific instructions on the grain size of segmentation, experts tended to integrate over longer episodes, resulting in a lower number of single units. Moreover, in accordance with studies in figure skating and basketball, we expected higher agreement among experts on where to place segmentation marks, i.e., boundaries. In line with this hypothesis, significantly more overlap of boundaries was found within the expert group than in the control group. This was observed even though the interindividual differences in the selected grain size were large and expertise had no systematic influence here. The absence of obvious goals or objects to structure Taekwondo forms underlines the importance of shared expert knowledge. Further, experts might have benefited from sensorimotor skills which allowed them to simulate the observed actions more precisely. Both aspects may explain the stronger agreement among experts even in unfamiliar Taekwondo forms. These interpretations are descriptively supported by the participants' statements about the features which guided segmentation and by an overlap of the group's agreed boundaries with those of an experienced referee. The study shows that action segmentation can be used to provide insights into the structure and content of action representations specific to experts. The mechanisms underlying shared knowledge among Taekwondoists and among experts in general are discussed against the background of current theoretical frameworks.


2021 ◽  
Vol 23 (Supplement_6) ◽  
pp. vi227-vi227
Author(s):  
Antonio Di Ieva ◽  
Carlo Russo ◽  
Abdulla Al Suman ◽  
Sidong Liu

Abstract Computational Neurosurgery is a novel field where computational modeling and artificial intelligence (AI) are used to analyze diseases of neurosurgical interest. Our aim is to apply AI models to brain tumor (BT) images to (a) automatically segment BTs on pre-operative MRI, (b) predict the genetic subtype of glioma on intra- and post-operative histological specimens, and (c) predict the extent of resection according to connectomics data. For the segmentation task, we used 510 BT images to train a deep learning (DL) model for automatic segmentation of the tumors' edges and compared the AI-generated masks against the experts' consensus (quantified by means of the Dice score). For the histopathology task, we digitalized 266 hematoxylin/eosin slides of gliomas (including 130 IDH-wildtype and 136 IDH-mutant) and applied a DL architecture to predict the IDH genetic status, which was then validated by immunohistochemistry and genetic sequencing. The datasets were also augmented by generating synthetic glioma images by means of a Generative Adversarial Network methodology. The resection of 10 BTs was also customized according to connectomics data. In the segmentation experiment, we reached a Dice score of ~0.9 (out of 1.0), while further demonstrating that only the T1, post-gadolinium T1, and FLAIR sequences are necessary for accurate automatic segmentation. In the histopathology task, we were able to predict the genetic status with an accuracy between 76% and 95% using the DL model. The machine learning-based connectome analysis allowed us to perform safe supramaximal resection. We have shown the robustness of applying AI methodology for the automatic segmentation of BTs in MR imaging. Moreover, we have also shown that AI can be used to predict the genetic status, specifically IDH, in histopathology images of gliomas. Our results support the use of AI in the clinical scenario for a fast and objective computerized characterization of patients affected by BTs.
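For reference, a minimal sketch of the Dice score used to compare AI-generated masks with the experts' consensus; this is the standard definition, not the authors' code:

```python
# Dice = 2|A ∩ B| / (|A| + |B|) for binary masks; 1.0 means perfect overlap.
import numpy as np

def dice_score(pred: np.ndarray, truth: np.ndarray) -> float:
    pred, truth = pred.astype(bool), truth.astype(bool)
    denom = pred.sum() + truth.sum()
    return 2.0 * np.logical_and(pred, truth).sum() / denom if denom else 1.0

pred = np.zeros((64, 64), dtype=bool); pred[10:40, 10:40] = True
truth = np.zeros((64, 64), dtype=bool); truth[12:42, 12:42] = True
print(f"Dice: {dice_score(pred, truth):.3f}")  # 0.871
```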


2021 ◽  
Vol 1 (1) ◽  
pp. 50-52
Author(s):  
Bo Dong ◽  
Wenhai Wang ◽  
Jinpeng Li

We present our solutions to the MedAI challenge for all three tasks: the polyp segmentation task, the instrument segmentation task, and the transparency task. We use the same framework to process the two segmentation tasks of polyps and instruments. The key improvement over last year is new state-of-the-art vision architectures, especially transformers, which significantly outperform ConvNets on medical image segmentation tasks. Our solution consists of multiple segmentation models, each using a transformer as the backbone network. After submission, we obtained a best IoU score of 0.915 on the instrument segmentation task and 0.836 on the polyp segmentation task. We provide our complete solutions at https://github.com/dongbo811/MedAI-2021.
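As a hedged illustration of a segmentation model with a transformer backbone (patch size, depth, and dimensions are illustrative assumptions; the team's actual models are in the repository above):

```python
# Patch-embed the image, run a transformer encoder over the tokens, then
# project and upsample the tokens back to a full-resolution mask.
import torch
import torch.nn as nn

class TransformerSeg(nn.Module):
    def __init__(self, img=224, patch=16, dim=256, depth=4, heads=8):
        super().__init__()
        self.grid = img // patch
        self.embed = nn.Conv2d(3, dim, kernel_size=patch, stride=patch)
        self.pos = nn.Parameter(torch.zeros(1, self.grid**2, dim))
        layer = nn.TransformerEncoderLayer(dim, heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, depth)
        self.head = nn.Conv2d(dim, 1, 1)
        self.up = nn.Upsample(scale_factor=patch, mode="bilinear",
                              align_corners=False)

    def forward(self, x):
        tokens = self.embed(x).flatten(2).transpose(1, 2) + self.pos
        tokens = self.encoder(tokens)
        feat = tokens.transpose(1, 2).reshape(-1, tokens.shape[-1],
                                              self.grid, self.grid)
        return self.up(self.head(feat))

print(TransformerSeg()(torch.randn(1, 3, 224, 224)).shape)
# torch.Size([1, 1, 224, 224])
```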


2021 ◽  
Vol 1 (1) ◽  
pp. 14-16
Author(s):  
Debapriya Banik ◽  
Kaushiki Roy ◽  
Debotosh Bhattacharjee

This paper addresses the Instrument Segmentation Task, a subtask of the “MedAI: Transparency in Medical Image Segmentation” challenge. To accomplish the subtask, our team “Med_Seg_JU” proposed a deep learning-based framework, namely “EM-Net: An Efficient M-Net for segmentation of surgical instruments in colonoscopy frames”. The proposed framework is inspired by the M-Net architecture; in it, we incorporate the EfficientNet B3 module with U-Net as the backbone. Our proposed method obtained a Jaccard coefficient (JC) of 0.8205, Dice similarity coefficient (DSC) of 0.8632, precision (PRE) of 0.8464, recall (REC) of 0.9005, F1 of 0.8632, and accuracy (ACC) of 0.9799, as evaluated by the challenge organizers on a separate test dataset. These results demonstrate the efficacy of our proposed method for the segmentation of surgical instruments.
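One way to assemble a comparable EfficientNet-B3 plus U-Net baseline is with the segmentation_models_pytorch library; this pairing is an assumption for illustration, since EM-Net itself adds M-Net-style multi-scale structure on top of such a backbone:

```python
# U-Net with an EfficientNet-B3 encoder via segmentation_models_pytorch.
import torch
import segmentation_models_pytorch as smp

model = smp.Unet(
    encoder_name="efficientnet-b3",  # EfficientNet B3 encoder
    encoder_weights="imagenet",      # pre-trained initialization
    in_channels=3,
    classes=1,                       # binary instrument mask
)
print(model(torch.randn(1, 3, 256, 256)).shape)  # torch.Size([1, 1, 256, 256])
```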


2021 ◽  
Vol 1 (1) ◽  
pp. 11-13
Author(s):  
Ayush Somani ◽  
Divij Singh ◽  
Dilip Prasad ◽  
Alexander Horsch

We often find ourselves in a trade-off between what is predicted and understanding why the predictive model made such a prediction. This high-risk medical segmentation task is no different: we try to interpret how well the model has learned from the image features, irrespective of its accuracy. We propose image-specific fine-tuning to make a deep learning model adaptive to specific medical imaging tasks. Experimental results reveal that: (a) the proposed model is more robust in segmenting previously unseen objects (negative test dataset) than state-of-the-art CNNs; (b) image-specific fine-tuning with the proposed heuristics significantly enhances segmentation accuracy; and (c) our model leads to accurate results with fewer user interactions and less user time than conventional interactive segmentation methods. The model successfully classified ’no polyp’ or ’no instruments’ in an image despite the absence of negative data in the training samples from the Kvasir-SEG and Kvasir-Instrument datasets.
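A minimal sketch of what image-specific fine-tuning can look like; the sparse-hint supervision and step counts are assumptions, not the authors' exact heuristics:

```python
# Briefly optimize a trained model on a single test image, supervising only
# the pixels covered by user hints (PyTorch sketch).
import torch
import torch.nn as nn

def image_specific_finetune(model, image, hint_mask, hint_labels,
                            steps=10, lr=1e-4):
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.BCEWithLogitsLoss()
    for _ in range(steps):
        logits = model(image)
        # Loss is computed only where the user provided hints.
        loss = loss_fn(logits[hint_mask], hint_labels[hint_mask])
        opt.zero_grad()
        loss.backward()
        opt.step()
    return model

model = nn.Conv2d(3, 1, 1)  # stand-in for a real segmentation network
img = torch.randn(1, 3, 64, 64)
hints = torch.zeros(1, 1, 64, 64, dtype=torch.bool)
hints[..., 20:30, 20:30] = True           # user marked this region as object
labels = torch.ones(1, 1, 64, 64)
image_specific_finetune(model, img, hints, labels)
```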


2021 ◽  
Vol 1 (1) ◽  
pp. 44-46
Author(s):  
Ashar Mirza ◽  
Rishav Kumar Rajak

In this paper, we present a UNet architecture-based deep learning method used to segment polyps and instruments from the image dataset provided in the MedAI Challenge 2021. For the polyp segmentation task, we developed a UNet-based algorithm for segmenting polyps in images taken from endoscopies; the main focus of this task is to achieve high segmentation metrics on the supplied test dataset. Similarly, for the instrument segmentation task, we developed a UNet-based algorithm for segmenting instruments present in colonoscopy videos.
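For illustration, here is a toy U-Net with a single down/up level and one skip connection; actual submissions use deeper variants of the same pattern:

```python
# Tiny U-Net: encode, downsample, decode, and fuse via a skip connection.
import torch
import torch.nn as nn

class TinyUNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.down = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU())
        self.pool = nn.MaxPool2d(2)
        self.mid = nn.Sequential(nn.Conv2d(16, 32, 3, padding=1), nn.ReLU())
        self.up = nn.ConvTranspose2d(32, 16, 2, stride=2)
        self.out = nn.Conv2d(32, 1, 1)  # 32 = 16 skip + 16 upsampled channels

    def forward(self, x):
        d = self.down(x)
        u = self.up(self.mid(self.pool(d)))
        return self.out(torch.cat([d, u], dim=1))  # skip connection

print(TinyUNet()(torch.randn(1, 3, 64, 64)).shape)  # torch.Size([1, 1, 64, 64])
```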

