INVESTIGATION OF THE EFFECTIVENESS OF POST-CLASSIFICATION PROCESSING METHODS FOR NOISY MULTICHANNEL IMAGES

Author(s):  
Irina Karlovna Vasilieva ◽ 
Vladimir Vasilievich Lukin

The subject matter of the article is the methods of local spatial post-processing of images obtained as a result of statistical per-pixel classification of multichannel satellite images distorted by additive Gaussian noise. The aim is to investigate the effectiveness of several variants of post-classification image processing methods over a wide range of signal-to-noise ratios; indicators of the reliability of classification of the observed objects were taken as the criterion of effectiveness. The tasks to be solved are: to generate random values of the noise components' brightness that conform to the adopted probabilistic model; to implement a procedure of statistical supervised classification by the maximum likelihood method for images distorted by noise; to evaluate the results of object selection in noisy images by the criterion of the empirical probability of correct recognition; to implement procedures for local object-oriented post-processing of images; to investigate the effect of noise variance on the effectiveness of the post-processing procedures. The methods used are: stochastic simulation, approximation of empirical dependencies, statistical recognition methods, probability theory and mathematical statistics, and local spatial filtering. The following results were obtained. Algorithms for rank and weighted median post-processing that take into account the results of classification by k-nearest neighbors in the filter window were implemented. An analysis of the developed algorithms' efficiency, based on estimates of the probability of correct recognition of objects in noisy images, was carried out. Empirical dependences of the estimated overall recognition error probability on the additive noise variance were obtained. Conclusions. 
The scientific novelty of the results obtained is as follows: combined approaches to building decision rules that take destabilizing factors into account have been further developed. It has been shown that local object-oriented filtering of segmented images reduces the number of point errors in the element-wise classification of objects and partially restores the connectedness and spatial distribution of image structure elements.
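The paper's exact rank and weighted-median rules are not reproduced in the abstract; as a rough illustration of local post-classification filtering, a simple majority-vote (mode) filter over the label map, similar in spirit to relabelling by k-nearest neighbors in the filter window, might look like the sketch below (function name and border handling are my own):

```python
import numpy as np

def majority_filter(labels, k=3):
    """Relabel each pixel to the most frequent class in its k x k window.

    Isolated misclassified pixels inside a homogeneous segment are
    outvoted by their neighbors and removed.
    """
    h, w = labels.shape
    r = k // 2
    padded = np.pad(labels, r, mode="edge")  # replicate borders
    out = np.empty_like(labels)
    for i in range(h):
        for j in range(w):
            window = padded[i:i + k, j:j + k].ravel()
            vals, counts = np.unique(window, return_counts=True)
            out[i, j] = vals[np.argmax(counts)]
    return out
```

On a label map with a single wrongly classified pixel inside a uniform segment, the filter restores the segment's class at that pixel.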

Author(s):  
Irina Karlovna Vasilieva ◽ 
Vladimir Vasilievich Lukin

The subject matter of the article is the methods of morphological spatial filtering of pseudo-color images obtained as a result of statistical segmentation of multichannel satellite images. The aim is to study the effectiveness of various methods of post-classification image processing in order to increase the probability of correct recognition of observed objects. The tasks to be solved are: to select a mathematical model describing the training sets of the objects' classes; to implement a procedure of statistical supervised classification by the maximum likelihood method; to evaluate the results of object recognition on the test image by the criterion of the empirical probability of correct recognition; to formalize procedures for local object-oriented filtering of a segmented image; to investigate the effectiveness of rank filtering and of weighted median filtering that takes into account the results of classification by k-nearest neighbors in the filter window. The methods used are approximation of empirical distributions, statistical recognition methods, probability theory and mathematical statistics, and local spatial filtering. The following results were obtained. A method for synthesizing a universal mathematical model, based on a multidimensional variant of the Johnson SB distribution, has been proposed for describing non-Gaussian signal characteristics of objects in multichannel images; this model was used for statistical pixel-by-pixel classification of the original satellite image. Algorithms for local post-classification processing in the neighborhood of the boundaries of the selected segments have been implemented. An analysis of the developed algorithms' effectiveness, based on estimates of the probability of correct recognition of classes, was performed. Conclusions. 
The scientific novelty of the results obtained is as follows: combined approaches to pattern recognition procedures have been further developed. It has been shown that local object-oriented filtering of segmented images makes it possible to reduce the number of point errors in the element-wise classification of spatial objects.
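The multidimensional Johnson SB model used in the paper is not spelled out in the abstract; for the one-dimensional case the defining property is that z = γ + δ·ln((x − ξ)/(ξ + λ − x)) is standard normal, so bounded non-Gaussian samples can be drawn by inverting that transform. A minimal sketch (parameter names follow the usual Johnson SB convention; the function itself is my own):

```python
import numpy as np

def johnson_sb_sample(gamma, delta, xi, lam, size, seed=None):
    """Draw samples from a one-dimensional Johnson SB distribution.

    If z ~ N(0, 1), then x = xi + lam / (1 + exp(-(z - gamma) / delta))
    follows a Johnson SB law, bounded on the interval (xi, xi + lam).
    """
    rng = np.random.default_rng(seed)
    z = rng.standard_normal(size)
    return xi + lam / (1.0 + np.exp(-(z - gamma) / delta))
```

The bounded support is what makes the family convenient for brightness data, which is confined to a finite dynamic range.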


1987 ◽  
Vol 16 (231) ◽  
Author(s):  
Bent Bruun Kristensen ◽  
Ole Lehrmann Madsen ◽  
Birger Møller-Pedersen ◽  
Kristen Nygaard

<p>The main strength of the sub-class mechanism as found in languages like C++, SIMULA and Smalltalk is its ability to express <em>specializations</em>. A general class, covering a wide range of objects, may be specialized to cover more specific objects. This is obtained by three properties of sub-classing: an object of a sub-class inherits the attributes of the super-class, virtual procedure/method attributes (of the super-class) may be specialized in the sub-class, and (in SIMULA only) it inherits the actions of the super-class.</p><p>In the languages mentioned above, virtual procedures/methods of a super-class are specialized in sub-classes in a very primitive manner: they are simply <em>re-defined</em> and need not bear any resemblance to the virtual in the super-class. In BETA, a new object-oriented language, classes and methods are unified into one concept, and by an extension of the virtual concept, virtual procedures/methods in sub-classes are defined as <em>specializations of the virtuals</em> in the super-class. The virtual procedures/methods of the sub-classes thus inherit the attributes (e.g. parameters) and actions from the ''super-procedure/method''.</p><p>In the languages mentioned above only procedures/methods may be virtual. As classes and procedures/methods are unified in BETA, this also gives <em>virtual classes</em>. The paper demonstrates how this may be used to parameterize types and enforce constraints on types.</p>
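BETA code is not shown in the abstract; as a loose analogy in Python (the Shape/Circle classes are hypothetical, chosen only for illustration), the contrast between plain re-definition and BETA-style specialization, where the sub-class's method extends rather than replaces the super-class's behavior, can be sketched as:

```python
class Shape:
    def describe(self):
        return ["shape"]

class RedefiningCircle(Shape):
    # C++/Smalltalk-style re-definition: the super-class behavior is
    # simply discarded and need not resemble it at all.
    def describe(self):
        return ["circle"]

class SpecializingCircle(Shape):
    # BETA-style specialization: the super-class behavior is inherited
    # and extended (roughly what BETA's inner achieves, although in BETA
    # the super-class drives the combination, not the sub-class).
    def describe(self):
        return super().describe() + ["circle"]
```

The second form guarantees that the super-class's actions always run, which is the property BETA enforces for specialized virtuals.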


Author(s):  
Vladimir Lukin ◽  
Galina Proskura ◽  
Irina Vasilieva

The subject of this study is the pixel-by-pixel supervised classification of multichannel satellite images distorted by additive white Gaussian noise. The aim of the paper is to study the effectiveness of various image classification methods over a wide range of signal-to-noise ratios; the F-measure is used as the criterion of recognition efficiency. It is the harmonic mean of precision and recall: precision shows how many of the objects identified by the classifier as positive actually are positive; recall shows how many of the positive objects were detected by the classifier. Tasks: to generate random values of the brightness of the noise components, ensuring their compliance with the accepted probabilistic model; to implement procedures of element-wise supervised classification using support vector machines, logistic regression, and a neural network based on a multilayer perceptron for images distorted by noise; to evaluate and analyze the results of the element-wise classification of objects in noisy images; to investigate the effect of noise variance on classification performance. The following results were obtained. Algorithms for pixel-by-pixel supervised classification were implemented. A comparative analysis of classification efficiency on noisy images was performed. Conclusions were drawn. It is shown that all classifiers provide the best results for classes that mainly correspond to areal objects (Water, Grass), while heterogeneous objects (Urban and, especially, Bushes) are recognized worst; classifiers based on the support vector machine and logistic regression show low recognition accuracy for extended objects, such as a narrow river (which belongs to the wider class "water"). The presence of noise in the image leads to a significant increase in the number of recognition errors, which mainly appear as isolated points on the selected segments, that is, incorrectly classified pixels. 
In this case, the best value of the classification quality indicator is achieved using a neural network based on a multilayer perceptron.
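The F-measure defined above, the harmonic mean of precision and recall for a given class, can be computed directly from the per-pixel predictions; a minimal sketch (function name my own):

```python
import numpy as np

def f_measure(y_true, y_pred, positive=1):
    """F-measure (harmonic mean of precision and recall) for one class."""
    y_true = np.asarray(y_true)
    y_pred = np.asarray(y_pred)
    tp = np.sum((y_pred == positive) & (y_true == positive))
    fp = np.sum((y_pred == positive) & (y_true != positive))
    fn = np.sum((y_pred != positive) & (y_true == positive))
    precision = tp / (tp + fp) if (tp + fp) > 0 else 0.0
    recall = tp / (tp + fn) if (tp + fn) > 0 else 0.0
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)
```

For example, two true positives, no false positives and one missed positive give precision 1.0, recall 2/3, and an F-measure of 0.8.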


2021 ◽  
Vol 11 (1) ◽  
Author(s):  
Ibtissame Khaoua ◽  
Guillaume Graciani ◽  
Andrey Kim ◽  
François Amblard

Abstract. For a wide range of purposes, one faces the challenge of detecting light from extremely faint and spatially extended sources. In such cases, detector noises dominate over the photon noise of the source, and quantum detectors in photon-counting mode are generally the best option. Here, we combine a statistical model with an in-depth analysis of detector noises and calibration experiments, and we show that visible light can be detected with an electron-multiplying charge-coupled device (EM-CCD) with a signal-to-noise ratio (SNR) of 3 for fluxes less than $$30\,\text{photon}\,\text{s}^{-1}\,\text{cm}^{-2}$$. For green photons, this corresponds to 12 aW $$\text{cm}^{-2}$$ ≈ $$9 \times 10^{-11}$$ lux, i.e. 15 orders of magnitude less than typical daylight. The strong nonlinearity of the SNR with the sampling time leads to a dynamic range of detection of 4 orders of magnitude. To detect possibly varying light fluxes, we operate in conditions of maximal detectivity $$\mathcal{D}$$ rather than maximal SNR. Given the quantum efficiency $$QE(\lambda)$$ of the detector, we find $$\mathcal{D} = 0.015\,\text{photon}^{-1}\,\text{s}^{1/2}\,\text{cm}$$, and a non-negligible sensitivity to blackbody radiation for T > 50 °C. This work should help design highly sensitive luminescence detection methods and develop experiments to explore dynamic phenomena involving ultra-weak luminescence in biology, chemistry, and material sciences.
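The paper's full detector-noise model is not reproduced in the abstract; under a deliberately simplified model of my own (Poisson-distributed signal and dark counts plus Gaussian read noise, ignoring EM-CCD excess noise), the nonlinear, sub-linear growth of SNR with sampling time mentioned above can be sketched as:

```python
import math

def photon_counting_snr(flux, qe, area, t, dark_rate, read_noise):
    """Simplified SNR of a photon-counting detector (illustrative only).

    flux: photons s^-1 cm^-2, qe: quantum efficiency, area: cm^2,
    t: sampling time in s, dark_rate: dark counts s^-1,
    read_noise: read noise in electrons RMS.
    """
    signal = flux * qe * area * t
    # shot noise of signal + dark counts, plus time-independent read noise
    noise = math.sqrt(signal + dark_rate * t + read_noise ** 2)
    return signal / noise
```

Because the noise grows roughly as the square root of t while the signal grows linearly, quadrupling the sampling time less than quadruples the SNR, which is the nonlinearity the dynamic-range argument relies on.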


2021 ◽  
Vol 17 (1-2) ◽  
pp. 3-14
Author(s):  
Stathis C. Stiros ◽  
F. Moschas ◽  
P. Triantafyllidis

GNSS technology (known especially from GPS satellites) for the measurement of deflections has proved very efficient and useful in bridge structural monitoring, even for short stiff bridges, especially after the advent of 100 Hz GNSS sensors. Mode computation from dynamic deflections has been proposed as one application of this technology. Apart from formal modal analyses with GNSS input, and from spectral analysis of controlled free attenuating oscillations, it has been argued that simple spectra of deflections can define more than one modal frequency. To test this scenario, we analyzed 21 controlled excitation events from a bridge monitoring survey, focusing on lateral and vertical deflections recorded both by GNSS and by an accelerometer. These events contain a transient and a following oscillation, and they are preceded and followed by intervals of quiescence and ambient vibrations. Spectra for each event, for the lateral and the vertical axis of the bridge, and for each instrument (GNSS, accelerometer) were computed, normalized to their maximum value, and printed one over the other to produce a single composite spectrum for each of the four sets. In these four sets, the true value of the modal frequency, derived from free attenuating oscillations, was also marked. It was found that for high-SNR (signal-to-noise ratio) deflections, spectral peaks in both the acceleration and displacement spectra differ by up to 0.3 Hz from the true value. For low SNR, deflection spectra do not match the true frequency, but acceleration spectra provide a low-precision estimate of it. This is because various excitation effects (traffic, wind, etc.) contribute numerous peaks over a wide range of frequencies. Reliable estimates of modal frequencies can hence be derived from deflection spectra only if excitation frequencies (mostly traffic and wind) can be filtered out along with most measurement noise, on the basis of additional data.
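Each composite spectrum above is built from per-event spectra normalized to their maximum; a minimal sketch of computing one such max-normalized amplitude spectrum from a sampled deflection record (function name my own) might be:

```python
import numpy as np

def normalized_amplitude_spectrum(signal, fs):
    """Amplitude spectrum of a real-valued record, normalized to its maximum.

    signal: evenly sampled deflection (or acceleration) record.
    fs: sampling frequency in Hz (e.g. 100 for 100 Hz GNSS).
    """
    sig = np.asarray(signal, dtype=float)
    sig = sig - sig.mean()                      # remove DC offset
    amp = np.abs(np.fft.rfft(sig))              # one-sided amplitude spectrum
    freqs = np.fft.rfftfreq(sig.size, d=1.0 / fs)
    return freqs, amp / amp.max()
```

Normalizing each event's spectrum to its peak lets spectra from events of very different amplitudes be overlaid into one composite plot.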


2021 ◽  
pp. 104973232199379
Author(s):  
Olaug S. Lian ◽  
Sarah Nettleton ◽  
Åge Wifstad ◽  
Christopher Dowrick

In this article, we qualitatively explore the manner and style in which medical encounters between patients and general practitioners (GPs) are mutually conducted, as exhibited in situ in 10 consultations sourced from the One in a Million: Primary Care Consultations Archive in England. Our main objectives are to identify interactional modes, to develop a classification of these modes, and to uncover how modes emerge and shift both within and between consultations. Deploying an interactional perspective and a thematic and narrative analysis of consultation transcripts, we identified five distinctive interactional modes: question and answer (Q&A) mode, lecture mode, probabilistic mode, competition mode, and narrative mode. Most modes are GP-led. Mode shifts within consultations generally map on to the chronology of the medical encounter. Patient-led narrative modes are initiated by patients themselves, which demonstrates agency. Our classification of modes derives from complete naturally occurring consultations, covering a wide range of symptoms, and may have general applicability.


Computers ◽  
2021 ◽  
Vol 10 (6) ◽  
pp. 82
Author(s):  
Ahmad O. Aseeri

Deep learning-based methods have emerged as one of the most effective and practical solutions for a wide range of medical problems, including the diagnosis of cardiac arrhythmias. A critical step toward an early diagnosis of many heart dysfunctions starts with the accurate detection and classification of cardiac arrhythmias, which can be achieved via electrocardiograms (ECGs). Motivated by the desire to enhance conventional clinical methods for diagnosing cardiac arrhythmias, we introduce an uncertainty-aware deep learning-based predictive model for accurate large-scale classification of cardiac arrhythmias, successfully trained and evaluated on three benchmark medical datasets. In addition, considering that the quantification of uncertainty estimates is vital for clinical decision-making, our method incorporates a probabilistic approach to capture the model's uncertainty using a Bayesian-based approximation method without introducing additional parameters or significant changes to the network's architecture. Although many arrhythmia classification solutions with various ECG feature-engineering techniques have been reported in the literature, the AI-based probabilistic method introduced in this paper outperforms existing methods, achieving multiclass F1 scores of 98.62% and 96.73% on the MIT-BIH dataset (20 annotations), 99.23% and 96.94% on the INCART dataset (eight annotations), and 97.25% and 96.73% on the BIDMC dataset (six annotations), for the deep ensemble and the probabilistic mode, respectively. We also demonstrate our method's high performance and statistical reliability in numerical experiments on language modeling using the gating mechanism of recurrent neural networks.
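The abstract does not name its Bayesian approximation; a common parameter-free choice consistent with the description is Monte Carlo sampling of stochastic forward passes (e.g. test-time dropout), after which the mean prediction and its predictive entropy summarize the model's uncertainty. A minimal sketch of that final aggregation step (array shapes and names are my own):

```python
import numpy as np

def predictive_mean_and_entropy(prob_samples):
    """Aggregate T stochastic forward passes into an uncertainty estimate.

    prob_samples: array of shape (T, n_classes), class probabilities
    from T stochastic passes over the same input.
    Returns the mean predictive distribution and its Shannon entropy.
    """
    p = np.asarray(prob_samples, dtype=float)
    p_mean = p.mean(axis=0)                       # averaged prediction
    entropy = -np.sum(p_mean * np.log(p_mean + 1e-12))  # predictive entropy
    return p_mean, entropy
```

A clinician-facing system can then flag inputs whose predictive entropy exceeds a threshold for manual review instead of trusting the raw class label.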


2021 ◽  
Vol 11 (1) ◽  
Author(s):  
Sakthi Kumar Arul Prakash ◽  
Conrad Tucker

Abstract. This work investigates the ability to classify misinformation in online social media networks in a manner that avoids the need for ground-truth labels. Rather than approaching the classification problem as a task for humans or machine learning algorithms, this work leverages user–user and user–media (i.e., media likes) interactions to infer the type of information (fake vs. authentic) being spread, without needing to know the actual details of the information itself. To study the inception and evolution of user–user and user–media interactions over time, we create an experimental platform that mimics the functionality of real-world social media networks. We develop a graphical model that considers the evolution of this network topology to model the propagation of uncertainty (entropy) as fake and authentic media disseminate across the network. The creation of a real-world social media network enables a wide range of hypotheses to be tested pertaining to users, their interactions with other users, and their interactions with media content. The discovery that the entropy of user–user and user–media interactions approximates fake and authentic media likes enables us to classify fake media in an unsupervised learning manner.
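The paper's graphical model is not reproduced in the abstract; the entropy quantity it propagates is, at its core, the Shannon entropy of an observed distribution of interaction events, which can be sketched minimally as (event encoding my own):

```python
import math
from collections import Counter

def interaction_entropy(events):
    """Shannon entropy (in bits) of a stream of interaction events.

    events: any sequence of hashable event labels, e.g. which user or
    media item each like/share targeted.
    """
    counts = Counter(events)
    total = sum(counts.values())
    return -sum((c / total) * math.log2(c / total) for c in counts.values())
```

A stream concentrated on a single target has zero entropy, while interactions spread evenly across targets maximize it; differences of this kind are what allow fake and authentic media to be separated without labels.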


2021 ◽  
Vol 20 (7) ◽  
pp. 911-927
Author(s):  
Lucia Muggia ◽  
Yu Quan ◽  
Cécile Gueidan ◽  
Abdullah M. S. Al-Hatmi ◽  
Martin Grube ◽  
...  

Abstract. Lichen thalli provide a long-lived and stable habitat for colonization by a wide range of microorganisms. Increased interest in these lichen-associated microbial communities has revealed an impressive diversity of fungi, including several novel lineages which still await formal taxonomic recognition. Among these, members of the Eurotiomycetes and Dothideomycetes usually occur asymptomatically in the lichen thalli, even if they share ancestry with fungi that may be parasitic on their host. Mycelia of the isolates are characterized by melanized cell walls, and the fungi display exclusively asexual propagation. Their taxonomic placement therefore requires the use of DNA sequence data. Here, we consider recently published sequence data from lichen-associated fungi and characterize and formally describe two new, individually monophyletic lineages at family, genus, and species levels. The Pleostigmataceae fam. nov. and Melanina gen. nov. both comprise rock-inhabiting fungi that associate with epilithic, crust-forming lichens in subalpine habitats. The phylogenetic placement and the monophyly of Pleostigmataceae lack statistical support, but the family was resolved as sister to the order Verrucariales. The family comprises the species Pleostigma alpinum sp. nov., P. frigidum sp. nov., P. jungermannicola, and P. lichenophilum sp. nov. The placement of the genus Melanina is supported as a lineage within the Chaetothyriales. To date, this genus comprises the single species M. gunde-cimermaniae sp. nov. and forms a sister group to a large lineage including Herpotrichiellaceae, Chaetothyriaceae, Cyphellophoraceae, and Trichomeriaceae. The new phylogenetic analysis of the subclass Chaetothyriomycetidae provides new insight into genus- and family-level delimitation and classification of this ecologically diverse group of fungi.


2021 ◽  
Vol 11 (1) ◽  
Author(s):  
Ata Chizari ◽  
Mirjam J. Schaap ◽  
Tom Knop ◽  
Yoeri E. Boink ◽  
Marieke M. B. Seyger ◽  
...  

Abstract. Enabling handheld perfusion imaging would drastically improve the feasibility of perfusion imaging in clinical practice. Therefore, we examine the performance of handheld laser speckle contrast imaging (LSCI) measurements compared to mounted measurements, demonstrated on psoriatic skin. A pipeline is introduced to process, analyze and compare data of 11 measurement pairs (mounted–handheld LSCI modes) operated on 5 patients and various skin locations. The on-surface speeds (i.e. the speed of light-beam movements on the surface) are quantified employing mean separation (MS) segmentation and enhanced correlation coefficient (ECC) maximization. The average on-surface speeds are found to be 8.5 times greater in handheld mode than in mounted mode. Frame alignment sharpens temporally averaged perfusion maps, especially in the handheld case. The results show that after proper post-processing, the handheld measurements are in agreement with the corresponding mounted measurements on a visual basis. The absolute movement-induced difference between mounted–handheld pairs after the background correction is $$16.4\pm 9.3\,\%$$ (mean ± std, $$n=11$$), with an absolute median difference of $$23.8\%$$. Realization of handheld LSCI facilitates measurements on a wide range of skin areas, bringing more convenience for both patients and medical staff.
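LSCI perfusion estimates derive from the local speckle contrast, conventionally defined as K = σ/μ (standard deviation over mean of intensity) within small windows of the raw speckle image; the paper's full pipeline is not reproduced here, but that core quantity can be sketched as (window size and border handling my own):

```python
import numpy as np

def speckle_contrast(img, k=7):
    """Local speckle contrast K = std / mean over k x k windows.

    Lower K corresponds to more blurring of the speckle pattern,
    i.e. faster motion such as higher perfusion.
    """
    img = np.asarray(img, dtype=float)
    h, w = img.shape
    r = k // 2
    pad = np.pad(img, r, mode="reflect")
    out = np.empty((h, w))
    for i in range(h):
        for j in range(w):
            window = pad[i:i + k, j:j + k]
            m = window.mean()
            out[i, j] = window.std() / m if m > 0 else 0.0
    return out
```

A perfectly static, uniform intensity field yields zero contrast everywhere, while a fully developed static speckle pattern approaches K = 1.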

