Mobile-Aware Deep Learning Algorithms for Malaria Parasites and White Blood Cells Localization in Thick Blood Smears

Algorithms ◽  
2021 ◽  
Vol 14 (1) ◽  
pp. 17
Author(s):  
Rose Nakasi ◽  
Ernest Mwebaze ◽  
Aminah Zawedde

Effective determination of malaria parasitemia is paramount in aiding clinicians to accurately estimate the severity of malaria and guide the response for quality treatment. Microscopy of thick blood smear films is the conventional method for malaria parasitemia determination. Despite its edge over other existing methods, it has been critiqued for being laborious and time-consuming, and it requires expert knowledge for efficient manual quantification of parasitemia. This poses a major challenge to many low-income countries, which are not only highly endemic but also under-resourced in terms of technical personnel in medical laboratories. This study presents an end-to-end deep learning approach to automate the localization and counting of P. falciparum parasites and white blood cells (WBCs) for effective parasitemia determination. The method involved building computer vision models on a dataset of annotated thick blood smear images. These models were built on pre-trained deep learning architectures, including the Faster Regional Convolutional Neural Network (Faster R-CNN) and Single Shot Multibox Detector (SSD), to process the obtained digital images. To improve model performance on a limited dataset, data augmentation was applied. Evaluation results showed that our approach reliably detected and counted parasites and WBCs with good precision and recall. A strong correlation was observed between our model-generated counts and the manual counts done by microscopy experts (a Spearman correlation of ρ = 0.998 for parasites and ρ = 0.987 for WBCs). Additionally, our proposed SSD model was quantized and deployed in a mobile smartphone-based inference app to detect malaria parasites and WBCs in situ. Our proposed method can be applied to support malaria diagnostics in settings with few trained microscopy experts yet constrained by large volumes of patients to diagnose.
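As a rough illustration of the final deployment step, the sketch below shows how a trained SSD detector could be converted to a quantized TensorFlow Lite model for on-device inference. The SavedModel path, input size, and representative-data generator are hypothetical placeholders, not the authors' actual artifacts.

```python
# Minimal sketch: post-training quantization of a trained SSD detector for
# on-device inference with TensorFlow Lite. Paths and sizes are assumptions.
import numpy as np
import tensorflow as tf

SAVED_MODEL_DIR = "ssd_malaria_savedmodel"   # hypothetical exported detector
IMG_SIZE = 300                               # typical SSD input resolution

def representative_data_gen():
    # A small set of preprocessed thick-smear tiles guides the quantizer's
    # activation ranges; random data stands in here for illustration only.
    for _ in range(100):
        sample = np.random.rand(1, IMG_SIZE, IMG_SIZE, 3).astype(np.float32)
        yield [sample]

converter = tf.lite.TFLiteConverter.from_saved_model(SAVED_MODEL_DIR)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = representative_data_gen

tflite_model = converter.convert()
with open("ssd_malaria_quantized.tflite", "wb") as f:
    f.write(tflite_model)
```

The resulting .tflite file is what a smartphone inference app would load; the exact export and app integration used by the authors are not reproduced here.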

2021 ◽  
Vol 9 (Suppl 3) ◽  
pp. A874-A874
Author(s):  
David Soong ◽  
Anantharaman Muthuswamy ◽  
Clifton Drew ◽  
...  

Background: Recent advances in machine learning and digital pathology have enabled a variety of applications including predicting tumor grade and genetic subtypes, quantifying the tumor microenvironment (TME), and identifying prognostic morphological features from H&E whole slide images (WSI). These supervised deep learning models require large quantities of images manually annotated with cellular- and tissue-level details by pathologists, which limits scale and generalizability across cancer types and imaging platforms. Here we propose a semi-supervised deep learning framework that automatically annotates biologically relevant image content from hundreds of solid tumor WSI with minimal pathologist intervention, thus improving quality and speed of analytical workflows aimed at deriving clinically relevant features.

Methods: The dataset consisted of >200 H&E images across >10 solid tumor types (e.g. breast, lung, colorectal, cervical, and urothelial cancers) from advanced disease patients. WSI were first partitioned into small tiles of 128μm for feature extraction using a 50-layer convolutional neural network pre-trained on the ImageNet database. Dimensionality reduction and unsupervised clustering were applied to the resultant embeddings, and image clusters were identified with enriched histological and morphological characteristics. A random subset of representative tiles (<0.5% of whole slide tissue areas) from these distinct image clusters was manually reviewed by pathologists and assigned to eight histological and morphological categories: tumor, stroma/connective tissue, necrotic cells, lymphocytes, red blood cells, white blood cells, normal tissue and glass/background. This dataset allowed the development of a multi-label deep neural network to segment morphologically distinct regions and detect/quantify histopathological features in WSI.

Results: As representative image tiles within each image cluster were morphologically similar, expert pathologists were able to assign annotations to multiple images in parallel, effectively at 150 images/hour. Five-fold cross-validation showed average prediction accuracy of 0.93 [0.8–1.0] and area under the curve of 0.90 [0.8–1.0] over the eight image categories. As an extension of this classifier framework, all whole slide H&E images were segmented, and composite lymphocyte, stromal, and necrotic content per patient tumor was derived and correlated with estimates by pathologists (p<0.05).

Conclusions: A novel and scalable deep learning framework for annotating and learning H&E features from a large unlabeled WSI dataset across tumor types was developed. This automated approach accurately identified distinct histomorphological features, with significantly reduced labeling time and effort required for pathologists. Further, this classifier framework was extended to annotate regions enriched in lymphocytes, stromal, and necrotic cells, an important TME contexture with clinical relevance for patient prognosis and treatment decisions.
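A minimal sketch of the tile-embedding and clustering idea described above, assuming an ImageNet-pretrained ResNet-50 backbone in PyTorch and generic PCA/k-means from scikit-learn; the tile paths, component count, and number of clusters are illustrative assumptions, not the authors' configuration.

```python
# Illustrative sketch (not the authors' code): embed H&E tiles with a
# pre-trained 50-layer ResNet, reduce dimensionality, and cluster tiles so
# pathologists can annotate whole clusters at once.
import numpy as np
import torch
import torchvision.models as models
import torchvision.transforms as T
from PIL import Image
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

device = "cuda" if torch.cuda.is_available() else "cpu"
backbone = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V1)
backbone.fc = torch.nn.Identity()      # keep the 2048-d pooled features
backbone.eval().to(device)

preprocess = T.Compose([
    T.Resize(224), T.CenterCrop(224), T.ToTensor(),
    T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

@torch.no_grad()
def embed_tiles(tile_paths):
    feats = []
    for path in tile_paths:
        x = preprocess(Image.open(path).convert("RGB")).unsqueeze(0).to(device)
        feats.append(backbone(x).squeeze(0).cpu().numpy())
    return np.stack(feats)

# Hypothetical usage on tiles cut from the WSIs:
# embeddings = embed_tiles(tile_paths)
# reduced = PCA(n_components=50).fit_transform(embeddings)
# clusters = KMeans(n_clusters=8, n_init=10).fit_predict(reduced)
```

Cluster-level review then replaces tile-by-tile annotation, which is the main source of the reported speed-up.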


Author(s):  
Thanh Tran ◽  
Lam Binh Minh ◽  
Suk-Hwan Lee ◽  
Ki-Ryong Kwon

Clinically, knowing the number of red blood cells (RBCs) and white blood cells (WBCs) helps doctors make better decisions toward an accurate diagnosis of numerous diseases. Manual cell counting is a very time-consuming and expensive process, and it depends on the experience of specialists. Therefore, a completely automatic method supporting cell counting is a viable solution for clinical laboratories. This paper proposes a novel blood cell counting procedure to address this challenge. The proposed method adopts SegNet, a deep learning semantic segmentation network, to simultaneously segment RBCs and WBCs. The global accuracy of segmenting WBCs, RBCs, and the background of peripheral blood smear images reaches 89%. Moreover, an effective solution to separate grouped or overlapping cells and count them is presented using the Euclidean distance transform, local maxima, and connected component labeling. The counting result of the proposed procedure achieves an accuracy of 93.3% for red blood cell counting on dataset 1 and 97.38% for white blood cell counting on dataset 2.
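The cell-separation step can be illustrated with a short sketch built on the Euclidean distance transform and local maxima; here marker-based watershed labeling from scikit-image stands in for the final labeling stage, and the binary mask is synthetic, whereas in the described procedure it would come from the SegNet output.

```python
# Hedged sketch of separating touching cells in a binary mask: distance
# transform -> local maxima -> marker-based labeling. Not the paper's code.
import numpy as np
from scipy import ndimage as ndi
from skimage.feature import peak_local_max
from skimage.segmentation import watershed

def count_cells(binary_mask, min_distance=10):
    # Distance to the nearest background pixel peaks at cell centres.
    distance = ndi.distance_transform_edt(binary_mask)
    # One marker per local maximum, i.e. per presumed cell centre.
    peaks = peak_local_max(distance, min_distance=min_distance,
                           labels=binary_mask.astype(int))
    markers = np.zeros_like(binary_mask, dtype=int)
    markers[tuple(peaks.T)] = np.arange(1, len(peaks) + 1)
    # Watershed splits touching cells along distance-transform ridges.
    labels = watershed(-distance, markers, mask=binary_mask)
    return labels.max(), labels

# Toy example with two overlapping discs (one "clump", two cells):
yy, xx = np.mgrid[0:100, 0:100]
mask = ((xx - 40) ** 2 + (yy - 50) ** 2 < 15 ** 2) | \
       ((xx - 60) ** 2 + (yy - 50) ** 2 < 15 ** 2)
n_cells, _ = count_cells(mask)
print(n_cells)  # expected: 2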


Author(s):  
Neerukattu Indrani and Chiraparapu Srinivasa Rao

The microscopic inspection of blood smears provides diagnostic information concerning a patient's health status. For example, the presence of infections, leukemia, and certain kinds of cancer can be diagnosed from the classification and count of white blood cells. The traditional differential blood count is performed by experienced operators, who use a microscope and count the percentage of occurrence of each cell type within an area of interest in the smear. This manual counting process is tedious and slow, and the classification and counting accuracy depends on the capabilities and experience of the operators. An automated differential counting system therefore becomes necessary. In this paper, CNN models are used. To achieve good performance from deep learning methods, the network needs to be trained with large amounts of data. We take images of white blood cells for the training phase and train our model on them. With this method we achieved better accuracy than traditional methods and can generate results within seconds.
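For illustration only, the sketch below shows the general shape of a small CNN classifier for white blood cell images in PyTorch; the number of classes, input size, and layer widths are assumptions and do not reflect the authors' exact network.

```python
# Hypothetical sketch of a compact CNN for white blood cell classification.
import torch
import torch.nn as nn

class WBCClassifier(nn.Module):
    def __init__(self, num_classes=4):            # e.g. 4 WBC types (assumed)
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(64, 128, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(128, num_classes),
        )

    def forward(self, x):
        return self.classifier(self.features(x))

model = WBCClassifier()
logits = model(torch.randn(8, 3, 128, 128))   # batch of 8 RGB cell crops
print(logits.shape)                           # torch.Size([8, 4])
```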


Author(s):  
Limu Chen ◽  
Ye Xia ◽  
Dexiong Pan ◽  
Chengbin Wang

Deep-learning-based navigational object detection is discussed with respect to an active monitoring system for anti-collision between vessels and bridges. The motion-based object detection methods widely used in existing anti-collision monitoring systems are inadequate for complicated and changeable waterways because of their limitations in accuracy, robustness, and efficiency. The proposed video surveillance system contains six modules: image acquisition, detection, tracking, prediction, risk evaluation, and decision-making; the detection module is discussed in detail. A vessel-exclusive dataset with a large number of image samples is established for neural network training, and an SSD (Single Shot MultiBox Detector) based object detection model with both universality and pertinence is generated through sample filtering, data augmentation, and large-scale optimization, making it capable of stable and intelligent vessel detection. Comparison with conventional methods indicates that the proposed deep-learning method shows remarkable advantages in robustness, accuracy, efficiency, and intelligence. An in-situ test was carried out at Songpu Bridge in Shanghai, and the results illustrate that the method is qualified for long-term monitoring and provides information support for further analysis and decision-making.
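As a hedged stand-in for the detection module, the sketch below runs torchvision's off-the-shelf SSD300 detector (COCO-pretrained) on a single frame; the authors' vessel-exclusive dataset, training tactics, and model weights are not reproduced here, and the score threshold is an assumption.

```python
# Illustrative SSD inference on one video frame using a generic pre-trained
# detector as a placeholder for the vessel-specific model described above.
import torch
from torchvision.models.detection import ssd300_vgg16, SSD300_VGG16_Weights

weights = SSD300_VGG16_Weights.DEFAULT
detector = ssd300_vgg16(weights=weights).eval()
preprocess = weights.transforms()

@torch.no_grad()
def detect(frame, score_threshold=0.5):
    # frame: float image tensor in [0, 1], shape (3, H, W)
    output = detector([preprocess(frame)])[0]
    keep = output["scores"] > score_threshold
    return output["boxes"][keep], output["labels"][keep], output["scores"][keep]

# Example on a random frame; a real system would feed surveillance video
# and pass the boxes on to the tracking and risk-evaluation modules.
boxes, labels, scores = detect(torch.rand(3, 480, 640))
print(boxes.shape)
```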


Sensors ◽  
2022 ◽  
Vol 22 (1) ◽  
pp. 160
Author(s):  
Xuelin Zhang ◽  
Donghao Zhang ◽  
Alexander Leye ◽  
Adrian Scott ◽  
Luke Visser ◽  
...  

This paper focuses on improving the performance of scientific instrumentation that uses glass spray chambers for sample introduction, such as spectrometers widely used in analytical chemistry, by detecting incidents with deep convolutional models. The performance of these instruments can be affected by the quality of the sample introduction into the spray chamber. Among the indicators of poor-quality sample introduction are two primary incidents: the formation of liquid beads on the surface of the spray chamber, and flooding at the bottom of the spray chamber. Detecting such events autonomously as they occur can help improve the overall operational accuracy and efficacy of the chemical analysis and avoid severe outcomes such as malfunction and instrument damage. In contrast to objects commonly seen in the real world, beading and flooding are more challenging to detect because they are very small and transparent. Furthermore, their non-rigid nature increases the difficulty of detection, such that existing deep-learning-based object detection frameworks are prone to fail on this task. No prior work has used computer vision to detect these incidents in the chemistry industry. In this work, we propose two frameworks for detecting these two incidents, which not only leverage modern deep learning architectures but also integrate expert knowledge of the problems. Specifically, the proposed networks first localize the regions of interest where the incidents are most likely to occur and then refine these incident outputs. The use of data augmentation and synthesis, and the choice of negative sampling in training, allows for a large increase in accuracy while remaining a real-time system at inference. On data collected in our laboratory, our method surpasses widely used object detection baselines and correctly detects 95% of the beads and 98% of the flooding events. At the same time, our method processes four frames per second and can be deployed in real time.
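A hedged sketch of the localize-then-refine idea: restrict attention to a region of interest where beading or flooding arises, then classify the crop with a small network. The ROI coordinates, class set, and refinement model below are illustrative assumptions, not the paper's architecture.

```python
# Illustrative two-stage routine: crop a fixed region of interest from the
# frame, then run a small refinement classifier on that crop only.
import torch
import torch.nn as nn

class IncidentRefiner(nn.Module):
    """Small classifier over an ROI crop: none / beading / flooding (assumed)."""
    def __init__(self, num_classes=3):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, num_classes),
        )

    def forward(self, x):
        return self.net(x)

def two_stage_detect(frame, roi_box, refiner):
    # Stage 1: restrict attention to the region where incidents occur.
    x0, y0, x1, y1 = roi_box
    crop = frame[:, :, y0:y1, x0:x1]
    # Stage 2: refine the decision on the crop only.
    return refiner(crop).softmax(dim=1)

refiner = IncidentRefiner()
frame = torch.rand(1, 3, 480, 640)                    # one camera frame
probs = two_stage_detect(frame, (200, 100, 440, 340), refiner)
print(probs)   # per-class probabilities: none / beading / flooding
```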

