PercepPan: Towards Unsupervised Pan-Sharpening Based on Perceptual Loss

2020 ◽  
Vol 12 (14) ◽  
pp. 2318 ◽  
Author(s):  
Changsheng Zhou ◽  
Jiangshe Zhang ◽  
Junmin Liu ◽  
Chunxia Zhang ◽  
Rongrong Fei ◽  
...  

In the literature on pan-sharpening based on neural networks, high-resolution multispectral images are generally unavailable as ground-truth labels. To tackle this issue, a common approach is to degrade the original images to a lower-resolution space for supervised training under Wald's protocol. In this paper, we propose an unsupervised pan-sharpening framework, referred to as "perceptual pan-sharpening". This novel method is based on an auto-encoder and perceptual loss, and it does not need the degradation step for training. To boost performance, we also suggest a novel training paradigm, "first supervised pre-training and then unsupervised fine-tuning", to train the unsupervised framework. Experiments on the QuickBird dataset show that the framework with different generator architectures achieves results comparable to its traditional supervised counterpart, and that the novel training paradigm performs better than random initialization. When generalizing to the IKONOS dataset, the unsupervised framework still obtains competitive results relative to the supervised ones.
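The abstract does not name the loss network; below is a minimal sketch of a perceptual loss of the kind described, assuming a frozen ImageNet-pretrained VGG-16 as the feature extractor (a common choice) and 3-channel inputs.

```python
import torch.nn as nn
from torchvision import models

class PerceptualLoss(nn.Module):
    """Compare images in a pretrained feature space rather than pixel space.
    Multispectral inputs would first need mapping to 3 channels."""
    def __init__(self, layer_index=16):
        super().__init__()
        vgg = models.vgg16(weights=models.VGG16_Weights.DEFAULT).features
        self.features = vgg[:layer_index].eval()
        for p in self.features.parameters():
            p.requires_grad = False  # the loss network stays frozen
        self.criterion = nn.MSELoss()

    def forward(self, generated, reference):
        return self.criterion(self.features(generated),
                              self.features(reference))
```

In the unsupervised setting described above, the reference features would come from the original (non-degraded) inputs rather than from unavailable high-resolution labels.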

2021 ◽  
Author(s):  
Charlotte Burup Kristensen ◽  
Katrine Aagaard Myhr ◽  
Frederik Fasth Grund ◽  
Niels Vejlstrup ◽  
Christian Hassager ◽  
...  

Purpose: Increased left ventricular mass (LVM) is a strong independent predictor of adverse cardiovascular events, but the conventional echocardiographic methods used to assess and monitor individuals are limited by poor reproducibility and accuracy. We aimed to develop an echocardiographic method for LVM quantification that is simple, reproducible, and accurate.
Methods: The novel method adds the mean wall thickness, acquired from the parasternal short-axis view, to the left ventricular end-diastolic volume acquired using the biplane model of discs. Cardiac assessment was performed using echocardiography followed immediately by cardiac magnetic resonance in 85 subjects with different left ventricular geometries, ranging from patients with various cardiac disorders (n=41) to individuals without known cardiac disorders (n=44). We compared the novel two-dimensional (2D) method to various conventional one-dimensional (1D) and 2D methods, as well as to three-dimensional (3D) echocardiography.
Results: The novel method had better intra-examiner (coefficient of variation (CV) 9% vs. 11-14%) and inter-examiner reproducibility (CV 9% vs. 10-20%) than the other methods. Accuracy of the novel method (mean difference ± 95% limits of agreement, CV) was similar to 3D (novel: 2±50 g, 15% vs. 3D: 2±51 g, 16%) and better than the 1D method of Devereux (7±76 g, 23%).
Conclusion: The novel 2D-based method for LVM quantification had better reproducibility than the other echocardiographic methods, and its accuracy was similar to 3D and better than conventional methods. As endocardial tracings using the biplane model form part of the standard echocardiographic protocol, the novel method can easily be integrated into any echocardiographic software without substantially increasing analysis time.
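The abstract describes the construction in words only; one plausible formalization, assuming the standard biplane method of discs and the usual myocardial density of about 1.05 g/mL (both assumptions on my part, not details from the paper):

```latex
% Biplane method of discs (standard): disc diameters a_i, b_i from the
% apical 4- and 2-chamber views, L = ventricular length.
V_{\mathrm{EDV}} = \frac{\pi}{4}\,\frac{L}{20}\sum_{i=1}^{20} a_i\,b_i
% Assumed construction: inflate each disc by the mean wall thickness
% \bar{t} (parasternal short-axis view) to approximate the epicardial shell.
V_{\mathrm{epi}} = \frac{\pi}{4}\,\frac{L+2\bar{t}}{20}\sum_{i=1}^{20} (a_i+2\bar{t})(b_i+2\bar{t})
% Mass from myocardial density \rho \approx 1.05 g/mL:
\mathrm{LVM} = \rho\,\bigl(V_{\mathrm{epi}} - V_{\mathrm{EDV}}\bigr)
```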


Molecules ◽  
2019 ◽  
Vol 25 (1) ◽  
pp. 152 ◽  
Author(s):  
Shaolong Zhu ◽  
Jinyu Zhang ◽  
Maoni Chao ◽  
Xinjuan Xu ◽  
Puwen Song ◽  
...  

Convolutional neural networks (CNNs) can be used to quickly identify crop seed varieties. In total, 1200 seeds of ten soybean varieties were selected, hyperspectral images of both the front and the back of the seeds were collected, and the reflectance of the soybeans was derived from the hyperspectral images. A total of 9600 images were obtained after data augmentation, and the images were divided into a training set, validation set, and test set at a 3:1:1 ratio. Pretrained models (AlexNet, ResNet18, Xception, InceptionV3, DenseNet201, and NASNetLarge) were fine-tuned for transfer learning, and the optimal CNN model for soybean seed variety identification was selected. Furthermore, traditional machine learning models for soybean seed variety identification were established using reflectance as input. The results show that all six models achieved 91% accuracy on the validation set and accuracies of 90.6%, 94.5%, 95.4%, 95.6%, 96.8%, and 97.2%, respectively, on the test set. This approach outperforms identification of soybean seed varieties based on hyperspectral reflectance alone. The experimental results support a novel method for identifying soybean seeds rapidly and accurately, and the method also provides a good reference for the identification of other crop seeds.
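The paper's training hyperparameters are not given in this abstract; below is a minimal PyTorch transfer-learning sketch for one of the listed backbones (ResNet18), with the optimizer and learning rate as assumptions.

```python
import torch
import torch.nn as nn
from torchvision import models

# Load an ImageNet-pretrained ResNet18 and replace its classification
# head for the ten soybean varieties, then fine-tune end to end.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, 10)  # 10 varieties

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)  # assumed values
criterion = nn.CrossEntropyLoss()

def train_step(images, labels):
    """One gradient step on a batch of seed images."""
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
    return loss.item()
```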


TAPPI Journal ◽  
2012 ◽  
Vol 11 (10) ◽  
pp. 9-17
Author(s):  
Alessandra Gerli ◽ 
Leendert C. Eigenbrood

A novel method was developed for the determination of linting propensity of paper, based on printing with an IGT printability tester and image analysis of the printed strips. The total fraction of the surface removed as lint during printing is typically 0.01%-0.1%. This value is lower than those reported in most laboratory printing tests, and more representative of commercial offset printing applications. Newsprint paper produced on a roll/blade former machine was evaluated for linting propensity using the novel method and also printed on a commercial coldset offset press. Laboratory and commercial printing results matched well, showing that linting was higher for the bottom side of the paper than for the top side, and that linting could be reduced on both sides by application of a dry-strength additive. In a second case study, varying wet-end conditions were used on a hybrid former machine to produce four paper reels, with the goal of matching the low linting propensity of paper produced on a machine with a gap former configuration. We found that the retention program, by improving fiber fines retention, substantially reduced the linting propensity of the paper produced on the hybrid former machine. The papers were also printed on a commercial coldset offset press. An excellent correlation was found between the total lint area removed from the bottom side of the paper samples during laboratory printing and the lint collected on halftone areas of the first upper printing unit after 45,000 copies. Finally, the method was applied to determine the linting propensity of highly filled supercalendered paper produced on a hybrid former machine. In this case, the linting propensity of the bottom side of the paper correlated with its ash content.
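The abstract does not detail the image-analysis step; the following is a hypothetical sketch of how a lint area fraction could be computed from a scanned print strip, assuming the image is cropped to the solid printed area and that lint shows up as unprinted light specks.

```python
import numpy as np
from skimage import color, filters, io

def lint_area_fraction(strip_path):
    """Percent of the printed surface removed as lint (hypothetical helper).

    Lint pulls ink away, leaving light specks inside the dark printed
    region; Otsu thresholding separates the two pixel classes.
    """
    gray = color.rgb2gray(io.imread(strip_path))
    threshold = filters.threshold_otsu(gray)
    lint_pixels = np.count_nonzero(gray > threshold)  # light specks
    return 100.0 * lint_pixels / gray.size
```

Values in the 0.01%-0.1% range reported above would correspond to roughly one light pixel per 1,000-10,000 printed pixels.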


2021 ◽  
Vol 9 (1) ◽  
Author(s):  
Aysen Degerli ◽  
Mete Ahishali ◽  
Mehmet Yamac ◽  
Serkan Kiranyaz ◽  
Muhammad E. H. Chowdhury ◽  
...  

Computer-aided diagnosis has become a necessity for accurate and immediate coronavirus disease 2019 (COVID-19) detection to aid treatment and prevent the spread of the virus. Numerous studies have proposed using deep learning techniques for COVID-19 diagnosis. However, they have used very limited chest X-ray (CXR) image repositories for evaluation, with only a few hundred COVID-19 samples. Moreover, these methods can neither localize nor grade the severity of COVID-19 infection. For this purpose, recent studies have proposed exploring the activation maps of deep networks. However, these remain inaccurate for localizing the actual infection, making them unreliable for clinical use. This study proposes a novel method for the joint localization, severity grading, and detection of COVID-19 from CXR images by generating so-called infection maps. To accomplish this, we have compiled the largest dataset with 119,316 CXR images, including 2951 COVID-19 samples, where the annotation of the ground-truth segmentation masks was performed on CXRs by a novel collaborative human-machine approach. Furthermore, we publicly release the first CXR dataset with ground-truth segmentation masks of the COVID-19-infected regions. A detailed set of experiments shows that state-of-the-art segmentation networks can learn to localize COVID-19 infection with an F1-score of 83.20%, which is significantly superior to the activation maps created by previous methods. Finally, the proposed approach achieved a COVID-19 detection performance of 94.96% sensitivity and 99.88% specificity.
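The reported detection figures are standard confusion-matrix metrics; a small helper clarifies what each percentage measures.

```python
def detection_metrics(tp, fp, tn, fn):
    """Sensitivity, specificity, and F1 from confusion-matrix counts."""
    sensitivity = tp / (tp + fn)   # of all COVID-19 cases, fraction found
    specificity = tn / (tn + fp)   # of all non-COVID cases, fraction cleared
    precision = tp / (tp + fp)
    f1 = 2 * precision * sensitivity / (precision + sensitivity)
    return sensitivity, specificity, f1
```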


2021 ◽  
Vol 11 (1) ◽  
Author(s):  
Christian Crouzet ◽  
Gwangjin Jeong ◽  
Rachel H. Chae ◽  
Krystal T. LoPresti ◽  
Cody E. Dunn ◽  
...  

Cerebral microhemorrhages (CMHs) are associated with cerebrovascular disease, cognitive impairment, and normal aging. One method to study CMHs is to analyze histological sections (5-40 μm) stained with Prussian blue. Currently, users manually and subjectively identify and quantify Prussian blue-stained regions of interest, a process that is prone to inter-individual variability and can lead to significant delays in data analysis. To improve this labor-intensive process, we developed and compared three digital pathology approaches to identify and quantify CMHs from Prussian blue-stained brain sections: (1) ratiometric analysis of RGB pixel values, (2) phasor analysis of RGB images, and (3) deep learning using a mask region-based convolutional neural network. We applied these approaches to a preclinical mouse model of inflammation-induced CMHs. One hundred CMHs were imaged using a 20× objective and an RGB color camera. To establish the ground truth, four users independently annotated Prussian blue-labeled CMHs. Compared to the ground truth, the deep learning and ratiometric approaches performed better than the phasor analysis approach. The deep learning approach had the highest precision of the three methods, while the ratiometric approach had the greatest versatility and maintained accuracy, albeit with less precision. Our data suggest that implementing these methods to analyze CMH images can drastically increase processing speed while maintaining precision and accuracy.
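The exact ratiometric rule is not given in the abstract; below is a hedged sketch of the general idea, flagging pixels whose blue channel dominates red (the threshold here is a placeholder, not the paper's value).

```python
import numpy as np

def prussian_blue_mask(rgb, ratio_threshold=1.4):
    """Boolean mask of pixels that look Prussian blue-stained.

    A simple ratiometric criterion: a blue/red ratio well above 1
    suggests the blue stain rather than neutral tissue background.
    """
    rgb = rgb.astype(np.float64) + 1e-6   # avoid division by zero
    ratio = rgb[..., 2] / rgb[..., 0]     # blue / red
    return ratio > ratio_threshold
```

CMH counts and total stained area would then follow from connected-component labeling of the mask.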


2021 ◽  
Vol 11 (1) ◽  
Author(s):  
Maximiliano Martín Aballay ◽  
Natalia Cristina Aguirre ◽  
Carla Valeria Filippi ◽  
Gabriel Hugo Valentini ◽  
Gerardo Sánchez

The advance of next-generation sequencing (NGS) technologies allows high-throughput genotyping at a reasonable cost, although in the case of peach this technology has scarcely been exploited. To date, only a standard genotyping-by-sequencing (GBS) approach, based on a single restriction with ApeKI to reduce genome complexity, has been applied in peach. In this work, we assessed the performance of the double-digest RADseq approach (ddRADseq) by testing six double restrictions against the restriction profile generated with ApeKI. The enzyme pair PstI/MboI retained the highest number of loci, in concordance with the in silico analysis. Under this condition, the analysis of a diverse germplasm collection (191 peach genotypes) yielded 200,759,000 paired-end (2 × 250 bp) reads that allowed the identification of 113,411 SNPs, 13,661 InDels and 2133 SSRs. We took advantage of this wide sample set to describe the technical scope of the platform. The novel platform presented here represents a useful tool for genomics-based breeding in peach.
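An in silico double digest of the kind used to choose the enzyme pair can be sketched with Biopython's `Bio.Restriction` module; the size-selection window below is an assumption, not a parameter from the paper.

```python
from Bio import SeqIO
from Bio.Restriction import MboI, PstI

def ddrad_fragment_count(fasta_path, min_len=200, max_len=600):
    """Count fragments flanked by a PstI cut on one side and an MboI
    cut on the other, within an assumed size-selection window."""
    count = 0
    for record in SeqIO.parse(fasta_path, "fasta"):
        cuts = sorted(
            [(pos, "PstI") for pos in PstI.search(record.seq)] +
            [(pos, "MboI") for pos in MboI.search(record.seq)]
        )
        for (p1, e1), (p2, e2) in zip(cuts, cuts[1:]):
            if e1 != e2 and min_len <= p2 - p1 <= max_len:
                count += 1
    return count
```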


Author(s):  
Zaheer Ahmed ◽  
Alberto Cassese ◽  
Gerard van Breukelen ◽  
Jan Schepers

We present a novel method, REMAXINT, that captures the gist of two-way interaction in row by column (i.e., two-mode) data, with one observation per cell. REMAXINT is a probabilistic two-mode clustering model that yields two-mode partitions with maximal interaction between row and column clusters. For estimation of the parameters of REMAXINT, we maximize a conditional classification likelihood in which the random row (or column) main effects are conditioned out. For testing the null hypothesis of no interaction between row and column clusters, we propose a max-F test statistic and discuss its properties. We develop a Monte Carlo approach to obtain its sampling distribution under the null hypothesis. We evaluate the performance of the method through simulation studies. Specifically, for selected values of data size and (true) numbers of clusters, we obtain critical values of the max-F statistic, determine the empirical Type I error rate of the proposed inferential procedure and study its power to reject the null hypothesis. Next, we show that the novel method is useful in a variety of applications by presenting two empirical case studies and end with some concluding remarks.
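A hedged sketch of the Monte Carlo step: simulate data under the null (main effects only, no interaction), maximize the interaction F statistic over candidate two-mode partitions, and read off a critical value. The `f_stat` callable and `candidate_partitions` iterable are placeholders for the paper's statistic and search space.

```python
import numpy as np

rng = np.random.default_rng(0)

def max_F(data, candidate_partitions, f_stat):
    """Maximize the interaction F statistic over two-mode partitions."""
    return max(f_stat(data, rows, cols)
               for rows, cols in candidate_partitions)

def mc_critical_value(n_rows, n_cols, candidate_partitions, f_stat,
                      n_sim=1000, alpha=0.05):
    """Null sampling distribution of max-F: random row and column
    main effects plus noise, but no row-by-column interaction."""
    stats = []
    for _ in range(n_sim):
        row_eff = rng.standard_normal((n_rows, 1))
        col_eff = rng.standard_normal((1, n_cols))
        noise = rng.standard_normal((n_rows, n_cols))
        stats.append(max_F(row_eff + col_eff + noise,
                           candidate_partitions, f_stat))
    return np.quantile(stats, 1 - alpha)
```

The observed max-F is then compared against this critical value to decide whether to reject the no-interaction null.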


Languages ◽  
2021 ◽  
Vol 6 (3) ◽  
pp. 123
Author(s):  
Thomas A. Leddy-Cecere

The Arabic dialectology literature repeatedly asserts the existence of a macro-level classificatory relationship binding the Arabic speech varieties of the combined Egypto-Sudanic area. This proposal, though oft-encountered, has not previously been formulated in reference to extensive linguistic criteria, but is instead framed primarily on the nonlinguistic premise of historical demographic and genealogical relationships joining the Arabic-speaking communities of the region. The present contribution provides a linguistically based evaluation of this proposed dialectal grouping, to assess whether the postulated dialectal unity is meaningfully borne out by available language data. Isoglosses from the domains of segmental phonology, phonological processes, pronominal morphology, verbal inflection, and syntax are analyzed across six dialects representing Arabic speech in the region. These are shown to offer minimal support for a unified Egypto-Sudanic dialect classification, but instead to indicate a significant north-south differentiation within the sample—a finding further qualified via application of the novel method of Historical Glottometry developed by François and Kalyan. The investigation concludes with reflection on the implications of these results for the understanding of the correspondence between linguistic and human genealogical relationships, in the history of Arabic and in dialectological practice more broadly.
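For readers unfamiliar with Historical Glottometry, a simplified sketch of its core scores follows; the definitions below are a paraphrase of François and Kalyan's published method, not formulas from this paper.

```python
def glottometry_scores(subgroup, isoglosses):
    """Cohesiveness and exclusivity for one candidate subgroup.

    An isogloss (the set of dialects sharing an innovation) *supports*
    the subgroup if it contains all of its members, and *crosscuts* it
    if it contains some members plus at least one outsider.
    """
    S = frozenset(subgroup)
    supporting = sum(1 for iso in isoglosses if S <= iso)
    crosscutting = sum(1 for iso in isoglosses
                       if iso & S and not S <= iso and iso - S)
    total = supporting + crosscutting
    cohesiveness = supporting / total if total else 0.0
    exclusivity = (sum(1 for iso in isoglosses if iso == S) / supporting
                   if supporting else 0.0)
    return cohesiveness, exclusivity
```

Under this reading, a postulated Egypto-Sudanic subgroup with low scores, against a north-south split with high scores, would match the finding reported above.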


Author(s):  
Xuhai Xu ◽  
Ebrahim Nemati ◽  
Korosh Vatanparvar ◽  
Viswam Nathan ◽  
Tousif Ahmed ◽  
...  

The prevalence of ubiquitous computing enables new opportunities for lung health monitoring and assessment. In the past few years, there have been extensive studies on cough detection using passively sensed audio signals. However, the generalizability of a cough detection model when applied to external datasets, especially in real-world implementations, is questionable and has not been explored adequately. Beyond detecting coughs, researchers have looked into how cough sounds can be used to assess lung health. However, due to the challenges of collecting both cough sounds and ground truth on lung health condition, previous studies have been hindered by limited datasets. In this paper, we propose Listen2Cough to address these gaps. We first build an end-to-end deep learning architecture using public cough sound datasets to detect coughs within raw audio recordings. We employ a pre-trained MobileNet and integrate a number of augmentation techniques to improve the generalizability of our model. Without additional fine-tuning, our model achieves an F1 score of 0.948 when tested against a new clean dataset, and 0.884 on another in-the-wild noisy dataset, an average advantage of 5.8% and 8.4%, respectively, over the best baseline model. Then, to mitigate the issue of limited lung health data, we propose transforming the cough detection task into lung health assessment tasks so that the rich cough data can be leveraged. Our hypothesis is that these tasks extract and utilize similar effective representations from cough sounds. We embed the cough detection model into a multi-instance learning framework with an attention mechanism and further tune the model for the lung health assessment tasks. Our final model achieves an F1-score of 0.912 on healthy vs. unhealthy, 0.870 on obstructive vs. non-obstructive, and 0.813 on COPD vs. asthma classification, outperforming the baseline by 10.7%, 6.3%, and 3.7%, respectively. Moreover, the weight values in the attention layer can be used to identify the coughs most highly correlated with lung health, which can potentially provide interpretability for expert diagnosis in the future.
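The abstract names a multi-instance learning framework with attention; below is a hedged PyTorch sketch of attention pooling over per-cough embeddings (the dimensions are assumptions, and the embeddings are taken to come from the MobileNet-based detector).

```python
import torch
import torch.nn as nn

class AttentionMIL(nn.Module):
    """Pool a bag of per-cough embeddings into one bag-level prediction;
    the learned weights indicate which coughs drive the decision."""
    def __init__(self, embed_dim=128, hidden_dim=64, n_classes=2):
        super().__init__()
        self.attention = nn.Sequential(
            nn.Linear(embed_dim, hidden_dim),
            nn.Tanh(),
            nn.Linear(hidden_dim, 1),
        )
        self.classifier = nn.Linear(embed_dim, n_classes)

    def forward(self, cough_embeddings):  # shape: (n_coughs, embed_dim)
        weights = torch.softmax(self.attention(cough_embeddings), dim=0)
        bag = (weights * cough_embeddings).sum(dim=0)  # (embed_dim,)
        return self.classifier(bag), weights.squeeze(-1)
```

The returned weights play the interpretability role described above: high-weight coughs are the ones most responsible for an unhealthy classification.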


2021 ◽  
Vol 13 (9) ◽  
pp. 4648
Author(s):  
Rana Muhammad Adnan ◽  
Kulwinder Singh Parmar ◽  
Salim Heddam ◽  
Shamsuddin Shahid ◽  
Ozgur Kisi

The accurate estimation of suspended sediments (SSs) is significant for determining dam storage volume, river carrying capacity, pollution susceptibility, soil erosion potential, aquatic ecological impacts, and the design and operation of hydraulic structures. The present study proposes a new method for accurately estimating daily SSs using antecedent discharge and sediment information. The novel method is developed by hybridizing the multivariate adaptive regression spline (MARS) and the K-means clustering algorithm (MARS-KM). The proposed method's efficacy is established by comparing its performance with adaptive neuro-fuzzy system (ANFIS), MARS, and M5 tree (M5Tree) models in predicting SSs at two stations on the Yangtze River, China, according to three assessment measures: RMSE, MAE, and NSE. Two modeling scenarios are employed: in the first, the data are divided 50-50% into model training and testing sets; in the second, the training and test sets are swapped. At Guangyuan Station, MARS-KM improved on the RMSE of the ANFIS, MARS, and M5Tree methods by 39%, 30%, and 18% in the first scenario and by 24%, 22%, and 8% in the second scenario, respectively, while at Beibei Station the corresponding improvements were 34%, 26%, and 27% in the first scenario and 7%, 16%, and 6% in the second. Additionally, the MARS-KM models provided much more satisfactory estimates using only discharge values as inputs.
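The abstract does not spell out how the two components are combined; below is a hedged cluster-then-fit sketch using scikit-learn's KMeans and the py-earth MARS implementation, assuming one local MARS model per K-means cluster.

```python
import numpy as np
from sklearn.cluster import KMeans
from pyearth import Earth  # sklearn-contrib py-earth

class MarsKM:
    """MARS-KM style hybrid (assumed design): K-means partitions the
    input space and a separate MARS model is fit within each cluster."""
    def __init__(self, n_clusters=3):
        self.kmeans = KMeans(n_clusters=n_clusters, n_init=10)
        self.models = {}

    def fit(self, X, y):
        labels = self.kmeans.fit_predict(X)
        for k in np.unique(labels):
            self.models[k] = Earth().fit(X[labels == k], y[labels == k])
        return self

    def predict(self, X):
        labels = self.kmeans.predict(X)
        return np.array([self.models[k].predict(x[None, :])[0]
                         for k, x in zip(labels, X)])
```

Here `X` would hold the antecedent discharge (and, where available, sediment) values and `y` the daily suspended sediment load.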

