Portable Ultrasound Research System for Use in Automated Bladder Monitoring with Machine-Learning-Based Segmentation

Sensors, 2021, Vol 21 (19), pp. 6481
Author(s): Marc Fournelle, Tobias Grün, Daniel Speicher, Steffen Weber, Mehmet Yilmaz, ...

We developed a new mobile ultrasound device for long-term, automated bladder monitoring without user interaction, consisting of 32-channel transmit and receive electronics and a 32-element, 3 MHz phased-array transducer. The device architecture is based on data digitization and rapid transfer to a consumer electronics device (e.g., a tablet) for signal reconstruction (e.g., by means of plane wave compounding algorithms) and further image processing. All reconstruction algorithms are implemented on the GPU, allowing real-time reconstruction and imaging. The system and the beamforming algorithms were evaluated with respect to imaging performance on standard sonographic phantoms (CIRS multipurpose ultrasound phantom) by analyzing the resolution, the SNR and the CNR. Furthermore, ML-based segmentation algorithms were developed and assessed with respect to their ability to reliably segment human bladders at different filling levels. A corresponding CNN was trained on 253 B-mode data sets, and 20 B-mode images were evaluated. The quantitative and qualitative results of the bladder segmentation are presented and compared to the ground truth obtained by manual segmentation.
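As a rough illustration of the kind of quantitative comparison against manual segmentation described above, the following sketch computes a Dice overlap between a predicted bladder mask and a manually annotated one; the metric choice, mask shapes, and values are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

def dice_coefficient(pred: np.ndarray, truth: np.ndarray) -> float:
    """Dice overlap between a predicted and a manually segmented binary mask."""
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    denom = pred.sum() + truth.sum()
    return 2.0 * intersection / denom if denom > 0 else 1.0

# Toy example: compare a CNN-predicted bladder mask to a manual annotation.
pred_mask = np.zeros((128, 128), dtype=bool)
true_mask = np.zeros((128, 128), dtype=bool)
pred_mask[40:90, 35:85] = True
true_mask[42:92, 38:88] = True
print(f"Dice: {dice_coefficient(pred_mask, true_mask):.3f}")
```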

2017, Vol 14 (S339), pp. 303-306
Author(s): J. Girard, M. Jiang, J-L. Starck, S. Corbel

Next-generation radio telescopes such as LOFAR and SKA will give access to high time resolution and high instantaneous sensitivity that can be exploited to study slow and fast transients over the whole radio window. The search for radio transients in large datasets also represents a new signal-processing challenge requiring efficient and robust signal reconstruction algorithms. Using sparse representations and the general ‘compressed sensing’ framework, we developed a 2D–1D algorithm based on the primal-dual splitting method. We have performed our sparse 2D–1D reconstruction on three-dimensional data sets containing either simulated or real radio transients, at various levels of SNR and integration times. This report presents a summary of the current level of performance of our method.
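A minimal sketch of a primal-dual splitting iteration in the Chambolle-Pock style, applied to a generic sparse recovery problem. It only illustrates the family of methods named above and is not the authors' 2D-1D transient-detection algorithm; the operator A, the regularization weight, and the step sizes are assumptions.

```python
import numpy as np

def soft_threshold(x, t):
    """Proximal operator of t*||.||_1 (element-wise soft-thresholding)."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def primal_dual_sparse(A, y, lam=0.1, n_iter=200):
    """Chambolle-Pock splitting for min_x 0.5*||Ax - y||^2 + lam*||x||_1."""
    L = np.linalg.norm(A, 2)            # spectral norm of A
    tau = sigma = 0.9 / L               # step sizes with tau*sigma*L^2 < 1
    x = np.zeros(A.shape[1])
    x_bar = x.copy()
    z = np.zeros(A.shape[0])
    for _ in range(n_iter):
        # Dual step: prox of the conjugate of the quadratic data term.
        z = (z + sigma * (A @ x_bar - y)) / (1.0 + sigma)
        # Primal step: soft-thresholding enforces sparsity.
        x_new = soft_threshold(x - tau * (A.T @ z), tau * lam)
        x_bar = 2.0 * x_new - x         # over-relaxation (theta = 1)
        x = x_new
    return x

# Toy usage on a random sensing matrix and a sparse signal.
rng = np.random.default_rng(0)
A = rng.standard_normal((64, 256))
x_true = np.zeros(256)
x_true[[10, 80, 200]] = [1.5, -2.0, 1.0]
y = A @ x_true + 0.01 * rng.standard_normal(64)
x_hat = primal_dual_sparse(A, y, lam=0.5)
```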


2021, Vol 5 (3), pp. 83
Author(s): Bilgi Görkem Yazgaç, Mürvet Kırcı

In this paper, we propose a fractional differential equation (FDE)-based approach for estimating the instantaneous frequencies of windowed signals as part of signal reconstruction. The approach models the bandpass-filter outputs around the peaks of a windowed signal as fractional differential equations and links the differintegrator parameters to the long-range dependence of the estimated instantaneous frequencies. We investigated the performance of the proposed approach with two evaluation measures and compared it to a benchmark noniterative signal reconstruction method (SPSI). The comparison was carried out with different overlap parameters to investigate the behavior of the proposed model with respect to resolution. An additional comparison was provided by feeding the outputs of the proposed method and of the benchmark method into iterative signal reconstruction algorithms. The proposed FDE method achieved better evaluation results at high resolution in the noniterative case, and results comparable to SPSI as the number of iterations of the iterative methods increased, regardless of the overlap parameter.
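For context, the iterative signal-reconstruction stage mentioned above is commonly instantiated with Griffin-Lim-style phase estimation. The sketch below is a generic Griffin-Lim loop built on SciPy's STFT, with window length and overlap chosen arbitrarily; it is not the FDE or SPSI method itself.

```python
import numpy as np
from scipy.signal import stft, istft

def griffin_lim(magnitude, n_iter=50, nperseg=512, noverlap=384):
    """Classic Griffin-Lim iterative phase estimation from an STFT magnitude.

    `magnitude` is expected to come from scipy.signal.stft with the same
    nperseg/noverlap settings, so that frame counts stay consistent.
    """
    rng = np.random.default_rng(0)
    phase = np.exp(2j * np.pi * rng.random(magnitude.shape))
    spectrum = magnitude * phase
    for _ in range(n_iter):
        _, x = istft(spectrum, nperseg=nperseg, noverlap=noverlap)
        _, _, spectrum_new = stft(x, nperseg=nperseg, noverlap=noverlap)
        phase = np.exp(1j * np.angle(spectrum_new))
        spectrum = magnitude * phase   # keep the target magnitude, update only the phase
    _, x = istft(spectrum, nperseg=nperseg, noverlap=noverlap)
    return x
```

In the noniterative setting compared in the paper, the initial phase would instead come from an estimator such as SPSI or the proposed FDE method rather than from random noise.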


2021, Vol 7 (2), pp. 21
Author(s): Roland Perko, Manfred Klopschitz, Alexander Almer, Peter M. Roth

Many scientific studies deal with person counting and density estimation from single images. Recently, convolutional neural networks (CNNs) have been applied to these tasks. Even though better results are often reported, it is frequently unclear where the improvements come from and whether the proposed approaches would generalize. Thus, the main goal of this paper was to identify the critical aspects of these tasks and to show how they limit state-of-the-art approaches. Based on these findings, we show how to mitigate these limitations. To this end, we implemented a CNN-based baseline approach, which we extended to deal with the identified problems. These include the discovery of bias in the reference data sets, ambiguity in ground-truth generation, and a mismatch between the evaluation metrics and the training loss function. The experimental results show that our modifications allow the baseline to be significantly outperformed in terms of the accuracy of person counts and density estimation. In this way, we obtain a deeper understanding of CNN-based person density estimation beyond the network architecture. Furthermore, our insights can help advance the field of person density estimation in general by highlighting current limitations in the evaluation protocols.
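One of the ambiguities discussed above is how point-wise head annotations are turned into density-map ground truth. A minimal sketch of the common Gaussian-kernel construction follows; the kernel width, image size, and annotation coordinates are illustrative assumptions rather than values from the paper.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def density_map_from_points(points, shape, sigma=4.0):
    """Build a person-density ground-truth map from head annotations.

    Each annotated head contributes a unit-mass Gaussian, so the integral of
    the map equals the person count. The kernel width sigma is exactly the
    kind of ambiguous ground-truth choice the paper discusses.
    """
    density = np.zeros(shape, dtype=np.float64)
    for x, y in points:
        if 0 <= int(y) < shape[0] and 0 <= int(x) < shape[1]:
            density[int(y), int(x)] += 1.0
    return gaussian_filter(density, sigma=sigma)   # 'reflect' boundary keeps the mass inside

points = [(120, 45), (130, 48), (300, 200)]        # (x, y) head positions
dmap = density_map_from_points(points, shape=(480, 640))
print(f"count from density map: {dmap.sum():.2f}")  # approximately 3
```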


Algorithms, 2021, Vol 14 (7), pp. 212
Author(s): Youssef Skandarani, Pierre-Marc Jodoin, Alain Lalande

Deep learning methods are the de facto solutions to a multitude of medical image analysis tasks. Cardiac MRI segmentation is one such application, which, like many others, requires a large amount of annotated data so that a trained network can generalize well. Unfortunately, having a large number of images manually curated by medical experts is both slow and extremely expensive. In this paper, we set out to explore whether expert knowledge is a strict requirement for the creation of annotated data sets on which machine learning can successfully be trained. To do so, we gauged the performance of three segmentation models, namely U-Net, Attention U-Net, and ENet, trained with different loss functions on expert and non-expert ground truth for cardiac cine-MRI segmentation. Evaluation was done with classic segmentation metrics (Dice index and Hausdorff distance) as well as clinical measurements, such as the ventricular ejection fractions and the myocardial mass. The results reveal that the generalization performance of a segmentation neural network trained on non-expert ground-truth data is, for all practical purposes, as good as that of one trained on expert ground-truth data, particularly when the non-expert receives a decent level of training. This highlights an opportunity for the efficient and cost-effective creation of annotations for cardiac data sets.
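As an illustration of the clinical measurements used in the evaluation, the sketch below derives ventricular volumes and the ejection fraction from stacks of binary segmentation masks. The voxel dimensions and placeholder masks are assumptions for illustration, not values from the study.

```python
import numpy as np

def ventricular_volume(mask_stack, voxel_volume_ml):
    """Volume of a segmented ventricle from a stack of short-axis binary masks."""
    return mask_stack.sum() * voxel_volume_ml

def ejection_fraction(edv_ml, esv_ml):
    """Ejection fraction (%) from end-diastolic and end-systolic volumes."""
    return 100.0 * (edv_ml - esv_ml) / edv_ml

# Hypothetical voxel size: 1.25 x 1.25 mm in-plane, 8 mm slice thickness.
voxel_ml = 0.125 * 0.125 * 0.8                     # cm^3 == ml
ed_masks = np.zeros((10, 256, 256), dtype=bool)    # end-diastolic LV masks (placeholder)
es_masks = np.zeros((10, 256, 256), dtype=bool)    # end-systolic LV masks (placeholder)
ed_masks[:, 100:140, 100:140] = True
es_masks[:, 108:132, 108:132] = True
edv = ventricular_volume(ed_masks, voxel_ml)
esv = ventricular_volume(es_masks, voxel_ml)
print(f"EDV {edv:.0f} ml, ESV {esv:.0f} ml, EF {ejection_fraction(edv, esv):.1f}%")
```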


2011, Vol 16 (9), pp. 1059-1067
Author(s): Peter Horvath, Thomas Wild, Ulrike Kutay, Gabor Csucs

Imaging-based high-content screens often rely on single-cell evaluation of phenotypes in large data sets of microscopic images. Traditionally, these screens are analyzed by extracting a few image-related parameters and using their ratios (linear single- or multiparametric separation) to classify the cells into various phenotypic classes. In this study, the authors show how machine learning-based classification of individual cells outperforms these classical ratio-based techniques. Using fluorescence intensity, morphological, and texture features, they evaluated how the performance of the data analysis increases with an increasing number of features. Their findings are based on a case study involving an siRNA screen monitoring nucleoplasmic and nucleolar accumulation of a fluorescently tagged reporter protein. For the analysis, they developed a complete analysis workflow incorporating image segmentation, feature extraction, cell classification, hit detection, and visualization of the results. For the classification task, the authors established a new graphical framework, the Advanced Cell Classifier, which provides very accurate high-content screen analysis with minimal user interaction, offering access to a variety of advanced machine learning methods.
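A compact sketch of the contrast drawn above between ratio-threshold classification and multi-feature machine-learning classification, using synthetic per-cell features and a scikit-learn random forest as a stand-in for the Advanced Cell Classifier's methods; all feature names, data, and model choices here are illustrative assumptions.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

# Hypothetical per-cell feature table: columns could be nucleoplasmic intensity,
# nucleolar intensity, area, and texture descriptors; labels are phenotype classes
# assigned on a training subset.
rng = np.random.default_rng(1)
features = rng.standard_normal((500, 12))
labels = (features[:, 0] / (np.abs(features[:, 1]) + 1.0)
          + 0.3 * features[:, 4] > 0.2).astype(int)

# Classical approach: threshold a single intensity ratio.
ratio = features[:, 0] / (np.abs(features[:, 1]) + 1.0)
ratio_accuracy = max(((ratio > t).astype(int) == labels).mean()
                     for t in np.linspace(-2, 2, 81))

# Machine-learning approach: a multi-feature classifier with cross-validation.
ml_accuracy = cross_val_score(RandomForestClassifier(n_estimators=200, random_state=0),
                              features, labels, cv=5).mean()
print(f"ratio threshold: {ratio_accuracy:.2f}, random forest: {ml_accuracy:.2f}")
```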


2020
Author(s): Tobias Rubel, Anna Ritz

Signaling pathways drive cellular response, and understanding such pathways is fundamental to molecular systems biology. A mounting volume of experimental protein-interaction data has motivated the development of algorithms to computationally reconstruct signaling pathways. However, existing methods suffer from low recall in recovering protein interactions in ground-truth pathways, limiting our confidence in any new predictions put forward for experimental validation. We present the Pathway Reconstruction AUGmenter (PRAUG), a higher-order function for producing high-quality pathway reconstruction algorithms. PRAUG modifies any existing pathway reconstruction method, resulting in augmented algorithms that outperform their un-augmented counterparts for six different algorithms across twenty-nine diverse signaling pathways. The algorithms produced by PRAUG collectively reveal potential new proteins and interactions involved in the Wnt and Notch signaling pathways. PRAUG offers a valuable framework for signaling pathway prediction and discovery.
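Since PRAUG is described as a higher-order function, the sketch below illustrates that wrapper pattern: a function that takes a pathway reconstruction method and returns an augmented one. The placeholder augmentation rule and the interface (receptors and transcription factors in, edges out) are assumptions for illustration, not PRAUG's actual procedure.

```python
from typing import Callable, Iterable, Set, Tuple

Edge = Tuple[str, str]
Reconstructor = Callable[[Set[str], Set[str]], Set[Edge]]

def augment(base_method: Reconstructor, interactome: Iterable[Edge]) -> Reconstructor:
    """Wrap a pathway-reconstruction method and return an augmented method.

    The augmentation used here (add interactome edges whose endpoints the base
    method already recovered) is only a placeholder illustrating the
    higher-order-function pattern.
    """
    interactome = set(interactome)

    def augmented(receptors: Set[str], tfs: Set[str]) -> Set[Edge]:
        base_edges = base_method(receptors, tfs)
        recovered_nodes = {n for e in base_edges for n in e}
        extra = {(u, v) for (u, v) in interactome
                 if u in recovered_nodes and v in recovered_nodes}
        return base_edges | extra

    return augmented
```

Any existing reconstruction function matching the `Reconstructor` signature could be passed to `augment`, which mirrors the idea of improving six different algorithms with a single wrapper.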


2019
Author(s): Hesam Mazidi, Tianben Ding, Arye Nehorai, Matthew D. Lew

The resolution and accuracy of single-molecule localization microscopes (SMLMs) are routinely benchmarked using simulated data, calibration “rulers,” or comparisons to secondary imaging modalities. However, these methods cannot quantify the nanoscale accuracy of an arbitrary SMLM dataset. Here, we show that by computing localization stability under a well-chosen perturbation with accurate knowledge of the imaging system, we can robustly measure the confidence of individual localizations without ground-truth knowledge of the sample. We demonstrate that our method, termed Wasserstein-induced flux (WIF), measures the accuracy of various reconstruction algorithms directly on experimental 2D and 3D data of microtubules and amyloid fibrils. We further show that WIF confidences can be used to evaluate the mismatch between computational models and imaging data, enhance the accuracy and resolution of reconstructed structures, and discover hidden molecular heterogeneities. As a computational methodology, WIF is broadly applicable to any SMLM dataset, imaging system, and localization algorithm.
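A schematic, heavily simplified sketch of the confidence-from-stability idea: perturb an estimated localization, re-run a user-supplied localization routine, and score how consistently it returns to the original estimate. The `localize` callable, its `init` parameter, and the scoring rule are hypothetical; the WIF statistic in the paper is a different, Wasserstein-based quantity.

```python
import numpy as np

def stability_score(localize, image, theta_hat, n_perturb=20, scale=1.0, rng=None):
    """Toy proxy for confidence-from-stability of a single localization.

    Perturb the estimate, re-run the (user-supplied, hypothetical) localizer on
    the same image, and summarize how far it drifts from the original estimate.
    """
    rng = np.random.default_rng() if rng is None else rng
    displacements = []
    for _ in range(n_perturb):
        theta_0 = theta_hat + scale * rng.standard_normal(theta_hat.shape)
        theta_new = localize(image, init=theta_0)        # hypothetical localizer interface
        displacements.append(np.linalg.norm(theta_new - theta_hat))
    return 1.0 / (1.0 + np.mean(displacements))          # in (0, 1]; 1 means perfectly stable
```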


2012, Vol 4 (4), pp. 15-30
Author(s): John Haggerty, Mark C. Casson, Sheryllynne Haggerty, Mark J. Taylor

The increasing use of social media (applications or platforms that allow users to interact online) ensures that this environment will provide a useful source of evidence for the forensic examiner. Current tools for the examination of digital evidence struggle with this data because they are not designed for the collection and analysis of online material. Therefore, this paper presents a framework for the forensic analysis of user interaction with social media. In particular, it presents an interdisciplinary approach for the quantitative analysis of user engagement to identify relational and temporal dimensions of evidence relevant to an investigation. The framework enables the analysis of large data sets from which a much smaller group of individuals of interest can be identified. In this way, it may be used to support the identification of individuals who might be ‘instigators’ of a criminal event orchestrated via social media, or to identify those who might be involved in the ‘peaks’ of activity. To demonstrate the applicability of the framework, the paper applies it to a case study of actors posting to a social media Web site.
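A small sketch of the temporal and relational analysis described above: bin posts per hour to surface activity ‘peaks’, then rank the accounts active inside those peaks. The post table, column names, and peak threshold are illustrative assumptions, not part of the paper's framework.

```python
import pandas as pd

# Hypothetical evidence table: one row per post, with author and timestamp.
posts = pd.DataFrame({
    "author": ["a", "b", "a", "c", "a", "b", "d"],
    "timestamp": pd.to_datetime([
        "2012-08-01 10:02", "2012-08-01 10:05", "2012-08-01 10:07",
        "2012-08-01 21:40", "2012-08-01 21:41", "2012-08-01 21:44",
        "2012-08-02 09:15",
    ]),
})

# Temporal dimension: posts per hour, to surface peaks of activity.
hourly = posts.set_index("timestamp").resample("1h").size()
peak_hours = hourly[hourly >= hourly.quantile(0.95)]

# Relational dimension: which accounts are most active inside the peak windows.
in_peak = posts[posts["timestamp"].dt.floor("1h").isin(peak_hours.index)]
print(in_peak["author"].value_counts())
```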

