A Smartphone-Based Cell Segmentation to Support Nasal Cytology

2020 ◽  
Vol 10 (13) ◽  
pp. 4567 ◽  
Author(s):  
Giovanni Dimauro ◽  
Davide Di Pierro ◽  
Francesca Deperte ◽  
Lorenzo Simone ◽  
Pio Raffaele Fina

Rhinology studies the anatomy, physiology, and diseases affecting the nasal region. One of the most modern techniques for diagnosing these diseases is nasal cytology, which involves microscopic analysis of the cells contained in the nasal mucosa. The standard clinical protocol for compiling the rhinocytogram requires observing, for each slide, at least 50 fields under an optical microscope to evaluate the cell population and search for cells important for diagnosis. The time and effort required of the specialist to analyze a slide are significant. In this paper, we present a smartphone-based system to support cell segmentation on images acquired directly from the microscope. The specialist can then analyze the extracted cells and other elements directly or, alternatively, send them to Rhino-cyt, a server system recently presented in the literature that also performs automatic cell classification and returns the final rhinocytogram, significantly reducing diagnosis time. The system crops cells with sensitivity = 0.96, which is satisfactory: few cells are overlooked as false negatives, so the output is largely sufficient to support the specialist effectively. The use of traditional image processing techniques for preprocessing also keeps the process computationally sustainable on medium- to low-end architectures and battery-efficient on a mobile phone.
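The paper does not publish its pipeline, but the cell-cropping step it describes (traditional image processing, cheap enough for low-end hardware) can be sketched in pure Python as a threshold-plus-connected-components pass; the threshold and minimum-area values below are illustrative assumptions, not the authors' parameters.

```python
from collections import deque

def crop_cells(gray, threshold=128, min_area=4):
    """Segment dark blobs (candidate cells) on a bright background and
    return their bounding boxes: threshold the grayscale image, then
    label 4-connected components with a BFS flood fill."""
    h, w = len(gray), len(gray[0])
    mask = [[gray[y][x] < threshold for x in range(w)] for y in range(h)]
    seen = [[False] * w for _ in range(h)]
    boxes = []
    for y in range(h):
        for x in range(w):
            if mask[y][x] and not seen[y][x]:
                # BFS over one connected component
                q = deque([(y, x)])
                seen[y][x] = True
                ys, xs = [], []
                while q:
                    cy, cx = q.popleft()
                    ys.append(cy)
                    xs.append(cx)
                    for ny, nx in ((cy - 1, cx), (cy + 1, cx),
                                   (cy, cx - 1), (cy, cx + 1)):
                        if (0 <= ny < h and 0 <= nx < w
                                and mask[ny][nx] and not seen[ny][nx]):
                            seen[ny][nx] = True
                            q.append((ny, nx))
                if len(ys) >= min_area:  # discard speckle noise
                    boxes.append((min(xs), min(ys), max(xs), max(ys)))
    return boxes
```

Each bounding box can then be cropped from the original frame and shown to the specialist or uploaded to the classification server.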

2018 ◽  
Vol 8 (12) ◽  
pp. 2569 ◽  
Author(s):  
David Luengo ◽  
David Meltzer ◽  
Tom Trigano

The electrocardiogram (ECG) was the first biomedical signal for which digital signal processing techniques were extensively applied. By its own nature, the ECG is typically a sparse signal, composed of regular activations (QRS complexes and other waveforms, such as the P and T waves) and periods of inactivity (corresponding to isoelectric intervals, such as the PQ or ST segments), plus noise and interferences. In this work, we describe an efficient method to construct an overcomplete and multi-scale dictionary for sparse ECG representation using waveforms recorded from real-world patients. Unlike most existing methods (which require multiple alternative iterations of the dictionary learning and sparse representation stages), the proposed approach learns the dictionary first, and then applies a fast sparse inference algorithm to model the signal using the constructed dictionary. As a result, our method is much more efficient from a computational point of view than other existing algorithms, thus becoming amenable to dealing with long recordings from multiple patients. Regarding the dictionary construction, we located first all the QRS complexes in the training database, then we computed a single average waveform per patient, and finally we selected the most representative waveforms (using a correlation-based approach) as the basic atoms that were resampled to construct the multi-scale dictionary. Simulations on real-world records from Physionet’s PTB database show the good performance of the proposed approach.
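The dictionary-construction steps described above (per-patient average waveforms, correlation-based pruning, multi-scale resampling) can be sketched as follows; the correlation threshold, the scale set, and the linear-interpolation resampling are illustrative assumptions, not the paper's exact choices.

```python
import numpy as np

def build_dictionary(patient_beats, corr_thresh=0.95, scales=(0.75, 1.0, 1.25)):
    """patient_beats: list of arrays, each holding one patient's aligned
    QRS beats (n_beats x beat_len). Returns a list of resampled atoms."""
    # 1) one average waveform per patient
    means = [b.mean(axis=0) for b in patient_beats]
    # 2) correlation-based selection: keep a waveform only if it is not
    #    already well represented by a selected atom
    atoms = []
    for m in means:
        if all(abs(np.corrcoef(m, a)[0, 1]) < corr_thresh for a in atoms):
            atoms.append(m)
    # 3) resample each atom at several scales to make the dictionary
    #    overcomplete and multi-scale
    dictionary = []
    for a in atoms:
        n = len(a)
        for s in scales:
            m = max(2, int(round(n * s)))
            x_new = np.linspace(0, n - 1, m)
            dictionary.append(np.interp(x_new, np.arange(n), a))
    return dictionary
```

Because the dictionary is learned once, up front, the sparse inference stage can then run over long multi-patient recordings without re-entering a learning loop.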



2020 ◽  
Vol 7 (2) ◽  
pp. 34-41
Author(s):  
Vladimir Nikonov ◽  
Anton Zobov

The construction and selection of a suitable bijective function, that is, a substitution, is now becoming an important applied task, particularly for building block encryption systems. Many articles have suggested different approaches to assessing the quality of a substitution, but most of them are highly computationally complex. Solving this problem would significantly expand the range of methods for constructing and analyzing schemes in information protection systems. The purpose of this research is to find easily measurable characteristics of substitutions that allow their quality to be evaluated, as well as measures of the proximity of a particular substitution to a random one, or of its distance from it. To this end, two characteristics are proposed in this work, a difference characteristic and a polynomial characteristic; their mathematical expectations are derived, as well as the variance of the difference characteristic. This allows a conclusion about a substitution's quality to be drawn by comparing the computed value of the characteristic for that substitution with the calculated mathematical expectation. From a computational point of view, the results of the article are of particular interest due to the simplicity of the algorithm for quantifying the quality of bijective substitutions. By its nature, computing the difference characteristic amounts to a simple summation of integer terms within a fixed, small range. Such an operation, on both current and prospective hardware, maps onto the logic of a wide range of functional elements, especially when implementing computations optically or on other carriers from the field of nanotechnology.
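The abstract does not give the formula for the difference characteristic, so the sketch below uses one plausible, hypothetical reading: sum the displacements (s[i] - i) mod n, each of which is an integer in the small fixed range [0, n-1], matching the "simple summation of integer terms" described above. The expectation formula is likewise derived only for this hypothetical definition.

```python
def difference_characteristic(s):
    """Hypothetical difference characteristic of a substitution s on
    Z_n: the sum of the displacements (s[i] - i) mod n."""
    n = len(s)
    return sum((s[i] - i) % n for i in range(n))

def expected_value(n):
    # For a uniformly random substitution each displacement is
    # (marginally) uniform on {0, ..., n-1}, so the expected sum is:
    return n * (n - 1) / 2
```

A substitution whose characteristic lands far from the expectation is then flagged as far from random: the identity permutation, for example, scores 0 against an expectation of n(n-1)/2.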


2019 ◽  
Vol 27 (3) ◽  
pp. 317-340 ◽  
Author(s):  
Max Kontak ◽  
Volker Michel

Abstract In this work, we present the so-called Regularized Weak Functional Matching Pursuit (RWFMP) algorithm, a weak greedy algorithm for linear ill-posed inverse problems. In comparison to the Regularized Functional Matching Pursuit (RFMP), on which it is based, the RWFMP possesses an improved theoretical analysis, including the guaranteed existence of the iterates, the convergence of the algorithm for inverse problems in infinite-dimensional Hilbert spaces, and a convergence rate that is also valid for the particular case of the RFMP. Another improvement is the removal of the previously required and difficult-to-verify semi-frame condition. Furthermore, we provide an a-priori parameter choice rule for the RWFMP, which yields a convergent regularization. Finally, we give a numerical example showing that the “weak” approach is also beneficial from the computational point of view. By applying an improved search strategy in the algorithm, motivated by the weak approach, we can save up to 90% of the computation time in comparison to the RFMP, while the accuracy of the solution is hardly affected.
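The computational gain of the weak approach comes from relaxing the atom search: instead of scanning the whole dictionary for the exactly best-matching atom, any atom whose inner product with the residual reaches a fraction ρ of an upper bound may be accepted. The sketch below illustrates this for finite-dimensional vectors with unit-norm atoms (where Cauchy-Schwarz gives the bound ‖r‖); it is a simplified stand-in, not the RWFMP itself, which operates in Hilbert spaces with regularization.

```python
import numpy as np

def weak_atom_search(residual, dictionary, rho=0.5):
    """One weak greedy selection step: return the first atom whose
    correlation with the residual meets the weak criterion, falling
    back to the exact maximum if none does. Atoms assumed unit-norm."""
    # sup_j |<r, a_j>| <= ||r|| by Cauchy-Schwarz, so any atom with
    # |<r, a>| >= rho * ||r|| satisfies the weak criterion
    bound = rho * np.linalg.norm(residual)
    best_i, best_v = 0, 0.0
    for i, a in enumerate(dictionary):
        v = abs(np.dot(residual, a))
        if v >= bound:
            return i, v          # early exit: weak criterion met
        if v > best_v:
            best_i, best_v = i, v
    return best_i, best_v        # exact maximum over the full scan
```

The early exit is what saves work: on average only a fraction of the dictionary is scanned per iteration, at the cost of a possibly suboptimal (but provably good-enough) atom choice.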


Author(s):  
Federico Perini ◽  
Anand Krishnasamy ◽  
Youngchul Ra ◽  
Rolf D. Reitz

The need for more efficient and environmentally sustainable internal combustion engines is driving research towards more realistic models for both fuel physics and chemistry. As far as compression ignition engines are concerned, phenomenological or lumped fuel models are unreliable for capturing spray and combustion strategies outside of their validation domains, typically high-pressure injection and high-temperature combustion. Furthermore, the development of variable-reactivity combustion strategies also creates the need to model comprehensively different hydrocarbon families, even in single-fuel surrogates. From the computational point of view, challenges to achieving practical simulation times arise from the dimensions of the reaction mechanism, which can contain hundreds of species even when hydrocarbon families are lumped into representative compounds and thus modeled with non-elementary, skeletal reaction pathways. In this case, it is also impossible to pursue further mechanism reduction to lower dimensions. CPU times for integrating chemical kinetics in internal combustion engine simulations ultimately scale with the number of cells in the grid and with the cube of the number of species in the reaction mechanism. In the present work, two approaches to reducing the demands of engine simulations with detailed chemistry are presented. The first addresses the cost of solving the chemistry ODE system and features the adoption of SpeedCHEM, a newly developed chemistry package that solves chemical kinetics using sparse analytical Jacobians. The second aims to reduce the number of chemistry calculations by binning the CFD cells of the engine grid into a subset of clusters, where chemistry is solved and then mapped back to the original domain.
In particular, a high-dimensional representation of the chemical state space is adopted to keep track of the different fuel components, and a newly developed bounding-box-constrained k-means algorithm is used to subdivide the cells into reactively homogeneous clusters. The approaches were tested on a number of simulations featuring multi-component diesel fuel surrogates and different engine grids. The results show that significant CPU time reductions, of about one order of magnitude, can be achieved without loss of accuracy in either engine performance or emissions predictions, supporting their applicability to more refined or full-sized engine grids.
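The cell-binning idea can be sketched as follows. This uses plain k-means with deterministic seeding as a stand-in for the paper's bounding-box-constrained variant, and a scalar placeholder for the chemistry solve; the point is only the structure: cluster the thermochemical states, call the solver once per cluster, and map results back to every member cell.

```python
import numpy as np

def bin_and_solve(states, k, solve_chemistry, iters=10):
    """states: (n_cells, n_features) thermochemical state of each CFD
    cell. Clusters the cells, solves chemistry once per cluster, and
    broadcasts each cluster's result back to its member cells."""
    states = np.asarray(states, dtype=float)
    # deterministic seeding: pick k states spread across the array
    idx = np.linspace(0, len(states) - 1, k).astype(int)
    centers = states[idx].copy()
    labels = np.zeros(len(states), dtype=int)
    for _ in range(iters):
        # assign each cell to its nearest centroid
        d = np.linalg.norm(states[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        for j in range(k):
            if (labels == j).any():
                centers[j] = states[labels == j].mean(axis=0)
    # one chemistry call per cluster instead of one per cell
    per_cluster = np.array([solve_chemistry(c) for c in centers])
    return per_cluster[labels]
```

With k clusters instead of n cells, the expensive kinetics integration runs k times per CFD step, which is where the order-of-magnitude speedup reported above comes from when the clusters are reactively homogeneous.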


Author(s):  
Virdiansyah Permana ◽  
Rahmat Shoureshi

This study presents a new approach to determining the controllability and observability of a large-scale nonlinear dynamic thermal system using graph theory. The novelty of this method lies in adapting graph theory to the nonlinear class and establishing a graphical condition that describes the necessary and sufficient conditions for a nonlinear system to be controllable and observable, equivalent to the analytical Lie algebra rank condition. A directed graph (digraph) is used to model the system, and the rules of its adaptation to the nonlinear class are defined. Subsequently, the necessary and sufficient conditions for controllability and observability are investigated through a structural property of the digraph called connectability. It is shown that the connectability conditions between inputs and states, as well as between outputs and states, of a nonlinear system are equivalent to the Lie algebra rank condition (LARC). This approach has proven to be easier from a computational point of view and is thus useful when dealing with large systems.
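The computational appeal is that connectability reduces to graph reachability, which is linear-time, whereas the LARC requires symbolic Lie-bracket computations. A minimal sketch of the input-connectability test (the paper's full construction of the digraph from the system equations is not reproduced here):

```python
from collections import deque

def connectable(edges, sources, states):
    """Input connectability sketch: every state node must be reachable
    from some input node through the directed graph.
    edges: dict mapping a node to its list of successor nodes."""
    seen = set()
    q = deque(sources)
    while q:                       # plain BFS from the input nodes
        u = q.popleft()
        if u in seen:
            continue
        seen.add(u)
        q.extend(edges.get(u, []))
    return all(x in seen for x in states)
```

The dual test for observability runs the same search from the output nodes over the reversed edges.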


2016 ◽  
Vol 2016 ◽  
pp. 1-8 ◽  
Author(s):  
Mochamad Arief Budihardjo

Morphological variations of geosynthetic clay liner (GCL) samples, hydrated with two different permeates, distilled water and NaCl solution (100 mM concentration), were observed in detail using microscopic analysis. After the GCL samples were hydrated with the NaCl solution, they were observed with an optical microscope. While the surface of the treated GCL samples was similar to the surface of the untreated GCL, a crystal deposit was found on the surface of the treated samples. Using a scanning electron microscope (SEM), a more solid appearance was observed for the bentonite particles contained in the GCL after the sample was hydrated with distilled water in comparison to the GCL sample that was hydrated with the NaCl solution. It appears that salt solution hydration results in less swelling of the bentonite particles. Furthermore, the energy-dispersive X-ray spectrometer (EDS) results showed that distilled water hydration had no effect on the distribution of the elements contained in the GCL samples. However, bound chlorine was observed, which demonstrated that the bentonite particles had absorbed the NaCl solution. In addition, changes in the hydraulic conductivity of the hydrated GCL samples were also observed.


2018 ◽  
Vol 115 (32) ◽  
pp. E7615-E7623 ◽  
Author(s):  
Florencia Garrido-Charad ◽  
Tomas Vega-Zuniga ◽  
Cristián Gutiérrez-Ibáñez ◽  
Pedro Fernandez ◽  
Luciana López-Jury ◽  
...  

The optic tectum (TeO), or superior colliculus, is a multisensory midbrain center that organizes spatially orienting responses to relevant stimuli. To define the stimulus with the highest priority at each moment, a network of reciprocal connections between the TeO and the isthmi promotes competition between concurrent tectal inputs. In the avian midbrain, the neurons mediating enhancement and suppression of tectal inputs are located in separate isthmic nuclei, facilitating the analysis of the neural processes that mediate competition. A specific subset of radial neurons in the intermediate tectal layers relay retinal inputs to the isthmi, but at present it is unclear whether separate neurons innervate individual nuclei or a single neural type sends a common input to several of them. In this study, we used in vitro neural tracing and cell-filling experiments in chickens to show that single neurons innervate, via axon collaterals, the three nuclei that comprise the isthmotectal network. This demonstrates that the input signals representing the strength of the incoming stimuli are simultaneously relayed to the mechanisms promoting both enhancement and suppression of the input signals. By performing in vivo recordings in anesthetized chicks, we also show that this common input generates synchrony between both antagonistic mechanisms, demonstrating that activity enhancement and suppression are closely coordinated. From a computational point of view, these results suggest that these tectal neurons constitute integrative nodes that combine inputs from different sources to drive in parallel several concurrent neural processes, each performing complementary functions within the network through different firing patterns and connectivity.


T-Comm ◽  
2020 ◽  
Vol 14 (12) ◽  
pp. 45-50
Author(s):  
Mikhail E. Sukhoparov ◽  
Ilya S. Lebedev

The development of the IoT concept makes it necessary to search for and improve models and methods for analyzing the state of remote autonomous devices. Because some devices are located outside the controlled area, it becomes necessary to develop universal models and methods for identifying the state of computationally low-power devices, using complex approaches to analyzing the data coming from various information channels. The article discusses an approach to identifying the state of IoT devices based on classifiers operating in parallel, which process time series received from elements in various states and modes of operation. The aim of the work is to develop an approach for identifying the state of IoT devices based on time series recorded during the execution of various processes. The proposed solution is based on methods of parallel classification and statistical analysis and requires an initial labeled sample. The use of several classifiers that give an answer "independently" of each other makes it possible to average out the error by "collective" voting. The developed approach was tested on a set of classifying algorithms fed with time series obtained experimentally under various operating conditions. Results are presented for a naive Bayes classifier, decision trees, discriminant analysis, and the k-nearest-neighbors method. The use of classification algorithms operating in parallel allows scaling by adding new classifiers without losing processing speed. The method makes it possible to identify the state of an Internet of Things device with relatively small computing-resource requirements, ease of implementation, and scalability through the addition of new classifying algorithms.
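The voting scheme above can be sketched with the standard library alone. The feature set (mean and standard deviation of a series) and the nearest-centroid base classifier are illustrative stand-ins for the classifiers the paper actually evaluates (naive Bayes, decision trees, discriminant analysis, k-NN); only the labeled-sample training and the "collective" majority vote mirror the described approach.

```python
from collections import Counter
import statistics

def features(series):
    # simple statistics summarizing one recorded time series
    return (statistics.mean(series), statistics.pstdev(series))

def nearest_centroid(train):
    """Build one simple classifier from the labeled sample,
    train = {state_label: [time_series, ...]}: nearest centroid
    in the two-dimensional feature space."""
    cents = {}
    for lab, runs in train.items():
        feats = [features(r) for r in runs]
        cents[lab] = tuple(statistics.mean(f[i] for f in feats)
                           for i in range(2))
    def predict(series):
        f = features(series)
        return min(cents, key=lambda lab: sum((a - b) ** 2
                                              for a, b in zip(f, cents[lab])))
    return predict

def majority_vote(classifiers, series):
    # "collective" voting: each classifier answers independently and
    # the majority label wins, averaging out individual errors
    votes = [clf(series) for clf in classifiers]
    return Counter(votes).most_common(1)[0][0]
```

New classifiers scale out by simply appending their predict functions to the list passed to `majority_vote`, without touching the existing ones.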

