Spot Detection in Microscopy Images using Convolutional Neural Network with Sliding-Window Approach

Author(s):  
Matsilele Mabaso ◽  
Daniel Withey ◽  
Bhekisipho Twala
PAMM ◽  
2021 ◽  
Vol 20 (1) ◽  
Author(s):  
Pouyan Asgharzadeh ◽  
Annette I. Birkhold ◽  
Bugra Özdemir ◽  
Ralf Reski ◽  
Oliver Röhrle

Sensors ◽  
2020 ◽  
Vol 20 (17) ◽  
pp. 4996 ◽  
Author(s):  
Haneul Jeon ◽  
Sang Lae Kim ◽  
Soyeon Kim ◽  
Donghun Lee

Classification of the foot–ground contact phases, as well as the swing phase, is essential in biomechanics domains that require lower-limb motion analysis; such analysis is used for lower-limb rehabilitation, walking gait analysis and improvement, and exoskeleton motion capture. In this study, sliding-window label overlapping of time-series wearable motion data during training-dataset acquisition is proposed to accurately detect the foot–ground contact phases, which are composed of three sub-phases, as well as the swing phase, at a frequency of 100 Hz with a convolutional neural network (CNN) architecture. We developed a real-time CNN model that achieved a test accuracy of 99.8% or higher, and confirmed that its validation accuracy was close to 85%.
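The sliding-window label-overlapping idea described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: the window length, stride, channel count, and the majority-vote labeling rule are all assumptions made for the example.

```python
import numpy as np

def sliding_windows(signal, labels, win=100, stride=50):
    """Split a labeled time series into overlapping windows; each window
    is labeled by the majority gait-phase label inside it (label overlapping)."""
    X, y = [], []
    for start in range(0, len(signal) - win + 1, stride):
        seg = signal[start:start + win]
        lab = labels[start:start + win]
        # majority vote over the overlapped per-sample labels
        vals, counts = np.unique(lab, return_counts=True)
        X.append(seg)
        y.append(vals[np.argmax(counts)])
    return np.stack(X), np.array(y)

# toy wearable-sensor stream: 300 samples of 6-axis inertial data (hypothetical)
sig = np.random.randn(300, 6)
lab = np.repeat([0, 1, 2, 3], 75)   # three contact sub-phases + swing phase
X, y = sliding_windows(sig, lab)
print(X.shape, y.shape)             # (5, 100, 6) (5,)
```

Overlapping windows multiply the number of training samples drawn from the same recording, which is the usual motivation for this kind of dataset acquisition.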


2020 ◽  
Vol 9 (4) ◽  
pp. 403-413
Author(s):  
Sandhopi ◽  
Lukman Zaman P.C.S.W ◽  
Yosi Kristian

As carving motifs continue to evolve, their forms and variations grow increasingly diverse, which makes it difficult to determine whether a carving bears a Jepara motif. In this paper, a transfer-learning method with a modified fully connected (FC) layer is used to identify the characteristic Jepara motifs in a carving. The dataset is divided into three color spaces: LUV, RGB, and YCrCb. In addition, a sliding window, non-max suppression, and heat maps are used to search the carving's object area and identify Jepara motifs. Testing all model weights shows that Xception achieves the highest accuracy for Jepara-motif classification: 0.95, 0.95, and 0.94 for the LUV, RGB, and YCrCb color-space datasets, respectively. However, when all of these model weights are applied in the Jepara-motif identification system, ResNet50 outperforms all the other networks, with motif-identification rates of 84%, 79%, and 80% for the LUV, RGB, and YCrCb color spaces, respectively. These results show that the system can help determine whether a carving is a Jepara carving by identifying the characteristic Jepara motifs it contains.
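The sliding-window-plus-heat-map localization step mentioned above can be sketched as below. This is a simplified illustration under assumed inputs: the window coordinates, scores, and the acceptance threshold are hypothetical, and real use would take the scores from the trained classifier.

```python
import numpy as np

def motif_heat_map(image_shape, detections):
    """Accumulate per-window classifier scores into a heat map;
    overlapping positive windows reinforce each other."""
    heat = np.zeros(image_shape, dtype=float)
    for (x0, y0, x1, y1, score) in detections:
        heat[y0:y1, x0:x1] += score
    return heat

# two overlapping sliding-window hits for a motif (toy coordinates/scores)
dets = [(10, 10, 40, 40, 0.9), (25, 25, 55, 55, 0.8)]
heat = motif_heat_map((64, 64), dets)
mask = heat > 1.0   # keep only regions confirmed by multiple windows
print(int(mask.sum()))
```

Thresholding the accumulated map plays a role similar to non-max suppression here: isolated single-window hits fall below the threshold, while multiply-confirmed motif regions survive.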


Symmetry ◽  
2021 ◽  
Vol 13 (1) ◽  
pp. 98
Author(s):  
Vladimir Sergeevich Bochkov ◽  
Liliya Yurievna Kataeva

This article describes an AI-based solution to multiclass fire segmentation. The flame contours are divided into red, yellow, and orange areas; this separation is necessary to identify the hottest regions for flame suppression. Flame objects can take a wide variety of shapes, both convex and non-convex. In that case, segmentation is more applicable than object detection, because the center of the fire is much more accurate and reliable information than the center of a bounding box and can therefore be used by robotic systems for aiming. The UNet model is used as the baseline for the initial solution because it is a well-established open-source convolutional neural network for segmentation. Since no open dataset for multiclass fire segmentation is available, a custom dataset of 6250 samples from 36 videos was developed and used in the current study. We compared UNet models trained with several input-data configurations. The first comparison is between two calculation schemes: fitting the whole frame to one window, and obtaining non-intersected areas from a sliding window over the input image. Second, we selected the better main loss-function metric between soft Dice and Jaccard. We addressed the problem of detecting flame regions at the boundaries of non-intersected regions and, as a solution, introduced new combinational methods for obtaining the output signal based on weighted summation and Gaussian mixtures of half-intersected areas. In the final section, we present the UUNet-concatenative and wUUNet models, which demonstrate significant improvements in accuracy and are considered state-of-the-art. All models use the original UNet backbone at the encoder layers (i.e., VGG16) to demonstrate the superiority of the proposed architectures. The results can be applied to many robotic firefighting systems.
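The boundary-artifact fix via weighted summation of half-intersected sliding-window outputs can be sketched as follows. This is a minimal single-channel illustration, not the paper's implementation: the tile size, the Gaussian weight shape, and the identity `predict` stand-in are assumptions for the example; in practice `predict` would be the segmentation network's per-tile logits.

```python
import numpy as np

def gaussian_window(size, sigma=0.5):
    """2-D Gaussian weight mask peaking at the tile centre."""
    ax = np.linspace(-1, 1, size)
    g = np.exp(-(ax ** 2) / (2 * sigma ** 2))
    return np.outer(g, g)

def blend_tiles(image, tile=32, predict=lambda t: t):
    """Run `predict` on half-intersected tiles and merge the outputs by
    Gaussian-weighted summation, suppressing tile-boundary artefacts."""
    h, w = image.shape
    out = np.zeros((h, w))
    norm = np.zeros((h, w))
    wmask = gaussian_window(tile)
    step = tile // 2                       # half-intersected sliding window
    for y in range(0, h - tile + 1, step):
        for x in range(0, w - tile + 1, step):
            out[y:y + tile, x:x + tile] += predict(image[y:y + tile, x:x + tile]) * wmask
            norm[y:y + tile, x:x + tile] += wmask
    return out / np.maximum(norm, 1e-8)

img = np.ones((64, 64))
merged = blend_tiles(img)
print(np.allclose(merged, 1.0))   # identity predictor is recovered exactly
```

Because each tile's contribution is down-weighted toward its edges, predictions near tile boundaries are dominated by neighbouring tiles whose centres cover that area, which is the intent of the half-intersection scheme.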


2021 ◽  
Author(s):  
Golnaz Moallem ◽  
Adity A. Pore ◽  
Anirudh Gangadhar ◽  
Hamed Sari-Sarraf ◽  
Siva A Vanapalli

Significance: Circulating tumor cells (CTCs) are important biomarkers for cancer management. CTCs isolated from blood are stained to detect and enumerate them. However, the staining process is laborious and, moreover, makes CTCs unsuitable for drug testing and molecular characterization. Aim: The goal is to develop and test deep learning (DL) approaches to detect unstained breast cancer cells in bright-field microscopy images that contain white blood cells (WBCs). Approach: We tested two convolutional neural network (CNN) approaches. The first approach allows investigation of the prominent features extracted by the CNN to discriminate cancer cells from WBCs. The second approach is based on the Faster Region-based Convolutional Neural Network (Faster R-CNN). Results: Both approaches detected cancer cells with high sensitivity and specificity, with the Faster R-CNN being more efficient and suitable for deployment. The distinctive feature the CNN uses to discriminate is cell size; however, in the absence of a size difference, the CNN was found to be capable of learning other features. The Faster R-CNN was found to be robust with respect to intensity and contrast image transformations. Conclusions: CNN-based deep learning approaches could potentially be applied to detect patient-derived CTCs from images of blood samples.
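The intensity/contrast robustness test mentioned in the results can be sketched as below. This is a hypothetical illustration, not the authors' protocol: the gain and bias ranges are assumptions, and a real evaluation would feed each variant through the trained detector and compare detections.

```python
import numpy as np

def intensity_contrast_variants(img, gains=(0.8, 1.2), biases=(-20, 20)):
    """Generate contrast (gain) and intensity (bias) shifted copies of a
    bright-field image, the kind of perturbation used to probe detector
    robustness (ranges are hypothetical)."""
    variants = []
    for g in gains:
        for b in biases:
            v = np.clip(img.astype(float) * g + b, 0, 255)
            variants.append(v.astype(np.uint8))
    return variants

img = np.full((8, 8), 100, dtype=np.uint8)   # toy uniform image
vs = intensity_contrast_variants(img)
print(len(vs), vs[0][0, 0], vs[-1][0, 0])    # 4 60 140
```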

