Pathology image segmentation is an essential step in the early detection and diagnosis of various diseases. Due to the complex nature of pathology images, precise segmentation is not a trivial task. Recently, deep learning has proven to be an effective option for pathology image processing; however, its efficiency is highly restricted by inconsistent annotation quality. In this article, we propose an accurate and noise-tolerant segmentation approach to overcome these issues. The approach consists of two main parts: a preprocessing module for data augmentation and a new neural network architecture, ANT-UNet. Experimental results demonstrate that, even on a noisy dataset, the proposed approach achieves more accurate segmentation, with a 6% to 35% accuracy improvement over other commonly used segmentation methods. In addition, the proposed architecture is hardware-friendly: it reduces the number of parameters to one-tenth of the original and achieves a 1.7× speed-up.
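The abstract above reports a tenfold parameter reduction without detailing how it is achieved. As a hedged illustration only, the sketch below compares the parameter count of a standard 3×3 convolution with a depthwise separable one, a common technique for shrinking segmentation networks; the channel sizes are assumptions, and ANT-UNet's actual design may differ.

```python
# Illustrative only: parameter counts of a standard 3x3 convolution versus a
# depthwise-separable one (depthwise 3x3 followed by 1x1 pointwise). The
# channel sizes below are assumptions, not ANT-UNet's actual layer shapes.

def conv_params(c_in, c_out, k=3):
    """Parameters of a standard k x k convolution (biases omitted)."""
    return k * k * c_in * c_out

def separable_params(c_in, c_out, k=3):
    """Depthwise k x k convolution plus a 1x1 pointwise convolution."""
    return k * k * c_in + c_in * c_out

if __name__ == "__main__":
    standard = conv_params(64, 64)        # 3*3*64*64 = 36864
    separable = separable_params(64, 64)  # 3*3*64 + 64*64 = 4672
    print(standard, separable, round(standard / separable, 1))
```

For a 64-to-64-channel layer this already gives roughly an 8× reduction; stacking such layers throughout a U-Net-style encoder-decoder is one plausible route to the order-of-magnitude savings the abstract reports.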
Under the impact of internet populism, internet violence, and other noise on the internet, medical elites with professional backgrounds are often unwilling to share their opinions online, and misinformation about health is therefore increasingly prevalent. We roughly divide users of social networks into ordinary users, medical elites, and super-influencers. In this paper, we propose a communication model for health information based on an improved Hegselmann-Krause (H-K) model. MATLAB-based simulations showed that network noise is an important factor interfering with the propagation of health opinions: the louder the noise, the harder it is for health opinions within a group to reach a consensus. Even in a noisy environment, however, super-influencers could fundamentally shape the overall public-health cognition of the social network. When the super-influencers held positive opinions on public health, the silence of the medical elites had a noise-tolerant effect on public-health opinion communication, and vice versa. Thus, three factors, namely noise control, the free release of information by medical elites, and a positive stance by super-influencers, are very important for forming a virtuous information environment for public health.
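The improved H-K model itself is not specified in the abstract above. As a rough sketch of the baseline it builds on, the code below implements the classic bounded-confidence Hegselmann-Krause update with additive noise; the confidence bound, noise level, and agent count are assumptions for illustration, not the paper's parameters.

```python
import numpy as np

def hk_step(opinions, eps=0.2, noise=0.05, rng=None):
    """One Hegselmann-Krause update with noise: each agent adopts the mean
    opinion of all agents within confidence bound eps of its own opinion,
    then Gaussian noise is added and opinions are clipped back to [0, 1].
    Illustrative baseline only; the paper's improved model may differ."""
    rng = rng if rng is not None else np.random.default_rng(0)
    new = np.empty_like(opinions)
    for i, x in enumerate(opinions):
        neighbors = opinions[np.abs(opinions - x) <= eps]
        new[i] = neighbors.mean()
    new += rng.normal(0.0, noise, size=new.shape)
    return np.clip(new, 0.0, 1.0)

if __name__ == "__main__":
    rng = np.random.default_rng(42)
    ops = rng.uniform(0.0, 1.0, size=50)
    for _ in range(30):
        ops = hk_step(ops, eps=0.2, noise=0.02, rng=rng)
```

Raising `noise` in such a simulation delays or prevents the opinion clusters from merging, which is consistent with the abstract's finding that louder noise makes consensus harder to reach.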
Retinal vessel segmentation benefits significantly from deep learning. Its performance relies on sufficient training images with accurate ground-truth segmentation, usually manually annotated in the form of binary pixel-wise label maps. Manually annotated ground-truth label maps inevitably contain errors for some pixels. Due to the thin structure of retinal vessels, such errors are more frequent and severe in manual annotations, which negatively affects deep learning performance.
In this paper, we develop a new method to automatically and iteratively identify and correct such noisy segmentation labels during network training. We consider the historical label maps predicted by the network-in-training at different epochs and jointly use them to self-supervise the predicted labels during training and to dynamically correct the noisy supervision labels.
We conducted experiments on the DRIVE, STARE, and CHASE-DB1 datasets with synthetic noise, pseudo-label noise, and manually labeled noise. For synthetic noise, the proposed method corrects the original noisy label maps into more accurate label maps, improving PR by 4.0%–9.8% on the three testing datasets. For the other two types of noise, the method also improves label-map quality.
Experimental results verified that the proposed method achieves better retinal image segmentation performance than many existing methods by simultaneously correcting the noise in the initial label maps.
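The correction mechanism described above, self-supervision from historical predictions, is only summarized in the abstract. A minimal sketch of one plausible variant, assuming per-pixel vessel probabilities averaged over several epochs and a confidence threshold of our own choosing, might look like:

```python
import numpy as np

def correct_labels(noisy_labels, epoch_probs, threshold=0.9):
    """Flip a noisy binary label when the average of the network's historical
    per-pixel vessel probabilities (one map per epoch) confidently disagrees.
    noisy_labels: (H, W) array of {0, 1}; epoch_probs: (E, H, W) in [0, 1].
    Illustrative variant only, not the paper's exact correction rule."""
    avg = epoch_probs.mean(axis=0)
    corrected = noisy_labels.copy()
    # Confident vessel prediction but background label -> relabel as vessel.
    corrected[(avg >= threshold) & (noisy_labels == 0)] = 1
    # Confident background prediction but vessel label -> relabel as background.
    corrected[(avg <= 1.0 - threshold) & (noisy_labels == 1)] = 0
    return corrected
```

Averaging over epochs is what makes the scheme self-supervising: a single epoch's prediction can be as noisy as the label, but predictions that agree across epochs are a more reliable signal for correction.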
One of the outstanding analytical problems in X-ray single-particle imaging (SPI) is the classification of structural heterogeneity, which is especially difficult given the low signal-to-noise ratios of individual patterns and the fact that even identical objects can yield patterns that vary greatly when orientation is taken into consideration. Proposed here are two methods which explicitly account for this orientation-induced variation and can robustly determine the structural landscape of a sample ensemble. The first, termed common-line principal component analysis (PCA), provides a rough classification which is essentially parameter free and can be run automatically on any SPI dataset. The second method, utilizing variational auto-encoders (VAEs), can generate 3D structures of the objects at any point in the structural landscape. Both methods are implemented in combination with the noise-tolerant expand–maximize–compress (EMC) algorithm, and their utility is demonstrated by applying them to an experimental dataset from gold nanoparticles with only a few thousand photons per pattern. Both discrete structural classes and continuous deformations are recovered. These developments diverge from previous approaches of extracting reproducible subsets of patterns from a dataset and open up the possibility of moving beyond the study of homogeneous sample sets to addressing open questions on topics such as nanocrystal growth and dynamics, as well as phase transitions which have not been externally triggered.
Distant Supervision is an approach that allows automatic labeling of instances and has been used in Relation Extraction. The main challenge of this task, however, is handling instances with noisy labels (e.g., when two entities in a sentence are automatically labeled with an invalid relation). The approaches reported in the literature address this problem by employing noise-tolerant classifiers; introducing a noise-reduction stage before the classification step, however, increases macro-precision values. This paper proposes an Adversarial Autoencoder-based approach for obtaining a new representation that allows noise reduction in Distant Supervision. The representation obtained using Adversarial Autoencoders minimizes the intra-cluster distance relative to pre-trained embeddings and classic Autoencoders. Experiments demonstrated that, on the noise-reduced datasets, macro-precision values similar to those on the original dataset are obtained using fewer instances with the same classifier. For example, on one of the noise-reduced datasets, macro precision improved by approximately 2.32% while using only 77% of the original instances. This suggests the validity of using Adversarial Autoencoders to obtain well-suited representations for noise reduction. The proposed approach thus maintains the macro-precision values of the original dataset while reducing the total number of instances needed for classification.
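The intra-cluster-distance comparison used above to judge representations can be made concrete. The sketch below computes the mean distance from each embedding to its cluster centroid; the embeddings and cluster labels here are synthetic placeholders, not the paper's AAE representations.

```python
import numpy as np

def mean_intra_cluster_distance(embeddings, labels):
    """Average Euclidean distance from each point to its cluster centroid.
    A lower value indicates tighter clusters, i.e., a representation better
    suited for filtering out noisily labeled instances. Illustrative metric;
    the paper's exact measure may differ."""
    total, n = 0.0, 0
    for c in np.unique(labels):
        pts = embeddings[labels == c]
        centroid = pts.mean(axis=0)
        total += np.linalg.norm(pts - centroid, axis=1).sum()
        n += len(pts)
    return total / n
```

Comparing this value across pre-trained embeddings, classic Autoencoder codes, and Adversarial Autoencoder codes, each clustered by relation label, is one way to quantify the claim that the adversarial representation is the most compact.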