Reconstructed data: Recently Published Documents


TOTAL DOCUMENTS: 48 (last five years: 21)
H-INDEX: 7 (last five years: 3)

2022, pp. 242-265
Author(s): Hema Nagaraja, Krishna Kant, K. Rajalakshmi

This paper investigates the hourly precipitation estimation capacity of an ANN trained on raw data versus data reconstructed with the proposed Precipitation Sliding Window Period (PSWP) method. Precipitation data from 11 Automatic Weather Stations (AWSs) in Delhi were obtained for the period from January 2015 to February 2016. The proposed PSWP method uses both the time and space dimensions to fill missing precipitation values: hourly precipitation follows patterns within particular periods and is correlated with neighboring stations. Based on these patterns, a Local Cluster Sliding Window Period (LCSWP) is defined for a single AWS and a Global Cluster Sliding Window Period (GCSWP) for all AWSs. The GCSWP is further classified into four categories, according to the patterns it contains, to fill the missing precipitation data. The experimental results indicate that an ANN trained on reconstructed data estimates precipitation better than one trained on raw data: the average RMSE is 0.44 for the ANN trained on raw data and 0.34 for the network trained on reconstructed data.
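
A minimal sketch of the window-based gap-filling idea follows, assuming a simple average over a temporal window at the same station plus the neighboring stations at the same hour; the function name, window size, and averaging rule are illustrative, not the authors' exact PSWP formulation.

```python
# Illustrative window-based gap filling: a missing hourly value is estimated
# from the same station's nearby hours (time dimension) and from the other
# stations at the same hour (space dimension). Not the published PSWP rule.
import numpy as np

def fill_missing(precip, window=3):
    """precip: 2-D array (hours x stations) with NaN marking gaps."""
    filled = precip.copy()
    hours, stations = precip.shape
    for t in range(hours):
        for s in range(stations):
            if np.isnan(precip[t, s]):
                lo, hi = max(0, t - window), min(hours, t + window + 1)
                temporal = precip[lo:hi, s]   # same station, nearby hours
                spatial = precip[t, :]        # neighbor stations, same hour
                pool = np.concatenate([temporal, spatial])
                pool = pool[~np.isnan(pool)]
                if pool.size:                 # leave the gap if nothing is known
                    filled[t, s] = pool.mean()
    return filled
```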


2021, Vol 12
Author(s): Zhaoyang Ge, Huiqing Cheng, Zhuang Tong, Lihong Yang, Bing Zhou, ...

Remote ECG diagnosis is widely used in the clinical ECG workflow. For patients with pacemakers in particular, given the limited information in a patient's medical history, doctors must determine whether the patient is wearing a pacemaker while also diagnosing other abnormalities. An automatic pacing-ECG detection method can help cardiologists reduce both workload and misdiagnosis rates. In this paper, we propose a novel autoencoder framework that detects pacing ECGs among remote ECGs. First, we add a memory module to the traditional autoencoder; it records and queries the typical features of the pacing ECGs seen during training. The framework does not feed the encoder features directly into the decoder but uses them to retrieve the most relevant items in the memory module. During training, the memory items are updated to represent the latent features of the input pacing ECGs. During detection, the decoder reconstructs the signal from the fused features retrieved from the memory module, so the reconstruction tends to be close to a pacing ECG. We also introduce an objective function based on metric learning: the error between the input data and the reconstructed data serves as the detection indicator, and inputs that are not pacing ECGs yield a large error. Furthermore, we introduce a new pacing ECG database comprising 800 patients with a total of 8,000 heartbeats. Experimental results demonstrate that our method achieves an average F1-score of 0.918. To further validate the generalization of the proposed method, we also run experiments on the widely used MIT-BIH arrhythmia database.
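
The memory-addressing step described above can be sketched as follows; the cosine-similarity addressing, sizes, and names are assumptions for illustration, not the authors' exact design.

```python
# A minimal PyTorch sketch of a memory module: the encoder feature is not
# decoded directly; it queries a bank of learned memory items, and the decoder
# sees a softmax-weighted combination of those items.
import torch
import torch.nn as nn
import torch.nn.functional as F

class MemoryModule(nn.Module):
    def __init__(self, num_items=50, feat_dim=64):
        super().__init__()
        # Learned memory items, updated during training to represent
        # the latent features of pacing ECGs.
        self.memory = nn.Parameter(torch.randn(num_items, feat_dim))

    def forward(self, z):                      # z: (batch, feat_dim)
        sim = F.cosine_similarity(z.unsqueeze(1), self.memory.unsqueeze(0), dim=2)
        w = F.softmax(sim, dim=1)              # addressing weights per memory item
        return w @ self.memory                 # fused feature fed to the decoder
```

At detection time, the reconstruction error between input and decoder output serves as the score: pacing ECGs, being represented in memory, reconstruct well, while other rhythms yield a large error.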


2021, Vol 11 (18), pp. 8775
Author(s): Wojciech Błachucki, Yves Kayser, Anna Wach, Rafał Fanselow, Christopher Milne, ...

Aqueous iron(III) oxide nanoparticles were irradiated with pure self-amplified spontaneous emission (SASE) X-ray free-electron laser (XFEL) pulses tuned to energies around the Fe K-edge ionization threshold. For each XFEL shot, the incident X-ray pulse spectrum and the Fe Kβ emission spectrum were measured synchronously with dedicated spectrometers and processed through a reconstruction algorithm that determines the Fe Kβ resonant X-ray emission spectroscopy (RXES) plane with high energy resolution. The influence of the number of X-ray shots on the quality of the reconstructed data was evaluated, yielding thresholds for the shot counts and experimental times needed for good data acquisition, which is essential for the practical use of scarce XFEL beam time.
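
If the RXES plane is taken to map each shot's incident spectrum linearly to its emission spectrum, the reconstruction reduces to a regularized linear inverse problem over the accumulated shots. The sketch below illustrates that reading only; the paper's actual algorithm may differ.

```python
# Hedged sketch: stacking shots gives E ~ S @ R, where S holds per-shot
# incident spectra, E the corresponding emission spectra, and R is the
# RXES plane. Solve by Tikhonov-regularized least squares.
import numpy as np

def reconstruct_rxes(S, E, ridge=1e-3):
    """S: (shots, n_in) incident spectra; E: (shots, n_out) emission spectra."""
    n_in = S.shape[1]
    # Regularized normal equations: R = (S^T S + ridge*I)^-1 S^T E
    R = np.linalg.solve(S.T @ S + ridge * np.eye(n_in), S.T @ E)
    return R  # (n_in, n_out): the reconstructed RXES plane

# More shots improve the conditioning of S^T S, one way to see why the
# number of shots sets a threshold for acceptable data quality.
```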


Information, 2021, Vol 12 (3), pp. 115
Author(s): Ahmad Saeed Mohammad, Dhafer Zaghar, Walaa Khalaf

With the development of mobile technology, the use of media data has increased dramatically, making data reduction that preserves valuable information an active research field. In this paper, we propose a new scheme called the Multi Chimera Transform (MCT), which reduces data while preserving information by producing three parameters from each 16×16 block of data. MCT is a 2D transform built on a codebook of 256 blocks picked from selected images with low mutual similarity. The proposed transform was applied to solid and soft biometric modalities of the AR database, preserving information well while producing small file sizes. It outperformed KLT and WT in terms of SSIM and PSNR: the highest SSIM was 0.87 for MCT on the full images of the AR database, versus 0.81 for KLT and 0.68 for WT, and the highest PSNR was 27.23 dB for MCT on the warped facial images of the AR database, versus 24.70 dB for KLT and 21.79 dB for WT.
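
One plausible reading of "three parameters per 16×16 block" is a codebook index plus a fitted gain and offset; the sketch below illustrates that kind of codebook-based reduction. The published MCT parameters are not specified here, so treat this as an assumption-laden illustration rather than the actual transform.

```python
# Illustrative codebook-based block reduction: each 16x16 block is replaced
# by (index, gain, offset), chosen by least-squares fit against each of the
# 256 codebook blocks. Hypothetical reading of MCT's three parameters.
import numpy as np

def encode_block(block, codebook):
    """block: (16,16); codebook: (256,16,16) -> (index, gain, offset)."""
    x = block.ravel().astype(float)
    best = None
    for i, c in enumerate(codebook):
        A = np.column_stack([c.ravel().astype(float), np.ones(x.size)])
        coef, *_ = np.linalg.lstsq(A, x, rcond=None)   # coef = [gain, offset]
        err = np.sum((A @ coef - x) ** 2)
        if best is None or err < best[0]:
            best = (err, i, coef[0], coef[1])
    return best[1:]  # 3 values stand in for 256 pixels

def decode_block(index, gain, offset, codebook):
    return gain * codebook[index] + offset
```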


2021
Author(s): Chang Liu, Akiyuki Kawasaki, Tomoko Shiroyama

As the longest river in Asia, the Yangtze River has shaped human societies, with floods recorded since the 12th century. In 1931 it produced one of the deadliest floods in Chinese history, causing 422,499 casualties and damage affecting more than 25.2 million people and 58.7 billion m² of farmland. The impact of the 1931 flood, including a rise in rice prices, persisted until 1933. Research on the 1931 flood damage has identified direct causes including political corruption, technical backwardness, and meteorological anomalies. Over the long term, however, it remains unclear whether social change intensified flood vulnerability or hydrological extremes accelerated social transformation. Here we propose a conceptual socio-hydrological framework within which the mutual influence between society and the water system is analyzed. To address data scarcity, we applied the Water and Energy Budget-based Distributed Hydrological Model (WEB-DHM) to reconstruct the hydrological conditions of early 20th-century China, from which potential rice production was estimated. With the reconstructed data, we found that changes in the social structure of villages aggravated the vulnerability of agricultural production to natural hazards, and that hydrological extremes sped up those structural changes. Our results demonstrate how reconstructed data can help in understanding a socio-hydrological system within a conceptual framework, shedding light on the inner workings of a pre-industrial society such as early 20th-century China. We anticipate our study to be a starting point for more sophisticated socio-hydrological models that will likely be applicable to many other regions and periods.


2021, pp. 000370282098784
Author(s): James Renwick Beattie, Francis Esmonde-White

Spectroscopy rapidly captures large amounts of data that are not directly interpretable. Principal Component Analysis (PCA) is widely used to distill complex spectral datasets into comprehensible information by identifying recurring patterns in the data with minimal loss of information. However, the linear algebra underpinning PCA is not well understood by many of the applied analytical scientists and spectroscopists who use it, and the meaning of the features PCA identifies is often unclear. This manuscript traces the journey of the spectra themselves through the operations behind PCA, with each step illustrated by simulated spectra. PCA relies solely on the information within the spectra; consequently, the mathematical model depends on the nature of the data itself. The direct links between model and spectra allow a concrete spectroscopic explanation of PCA, such as the scores representing 'concentrations' or 'weights'. The principal components (loadings) are by definition hidden, repeated, and uncorrelated spectral shapes that combine linearly to generate the observed spectra; they can be visualized as subtraction spectra between extreme differences within the dataset. Each PC is shown to be a successive refinement of the estimated spectra, improving the fit between the PC-reconstructed data and the original data. Understanding the data-led development of a PCA model shows how to interpret the application-specific chemical meaning of the loadings and how to analyze the scores. A critical benefit of PCA is its simplicity and the succinctness of its description of a dataset, which make it powerful and flexible.
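
The steps the paper walks through can be condensed into a few lines of linear algebra: center the spectra, take loadings and scores from an SVD, and reconstruct with a growing number of PCs. A minimal sketch with simulated spectra (the shapes and sizes are arbitrary stand-ins):

```python
# Simulated spectra = concentrations @ hidden shapes + noise; PCA recovers
# the shapes as loadings and the concentrations as scores.
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0, 1, 500)                       # wavenumber axis
shapes = np.stack([np.sin(2 * np.pi * x),        # hidden, uncorrelated shapes
                   np.exp(-((x - 0.5) / 0.05) ** 2)])
conc = rng.uniform(size=(30, 2))                 # per-spectrum 'concentrations'
spectra = conc @ shapes + rng.normal(scale=0.01, size=(30, 500))

mean = spectra.mean(axis=0)
X = spectra - mean                               # PCA models variation about the mean
U, s, Vt = np.linalg.svd(X, full_matrices=False)
scores = U * s                                   # the 'weights' of each spectrum
loadings = Vt                                    # the recovered spectral shapes

k = 2                                            # each extra PC refines the fit
reconstructed = scores[:, :k] @ loadings[:k] + mean
rmse = np.sqrt(np.mean((reconstructed - spectra) ** 2))
print(f"RMSE of the {k}-PC reconstruction: {rmse:.4f}")
```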


2020, Vol 20 (1)
Author(s): Yawen Jiang, Weiyi Ni

Abstract
Background: The objectives of the present study were to evaluate the performance of a time-to-event data reconstruction method, to assess the bias and efficiency of unanchored matching-adjusted indirect comparison (MAIC) methods for the analysis of time-to-event outcomes, and to propose an approach to adjust the bias of unanchored MAIC when confounders may be omitted across trials.
Methods: Using a Monte Carlo approach, one thousand repetitions of simulated data sets were generated for two single-arm trials. In each repetition, researchers were assumed to have access to individual-level patient data (IPD) for one trial and the published Kaplan-Meier curve of the other. First, we compared the raw data and the reconstructed IPD using Cox regressions to determine the performance of the data reconstruction method. Then, we evaluated alternative unanchored MAIC strategies with varying completeness of matching covariates in terms of bias, efficiency, and confidence interval coverage. Finally, we proposed a bias factor-adjusted approach to gauge the true effects when unanchored MAIC estimates may be biased by omitted variables.
Results: The reconstructed data represented the raw data well, in the sense that the difference between the two was not statistically significant over the one thousand repetitions. The bias of unanchored MAIC estimates ranged from minimal to substantial as the set of covariates became less complete. Moreover, the confidence interval estimates of unanchored MAIC were suboptimal even with the complete set of covariates. Finally, the proposed bias factor-adjusted method substantially reduced omitted-variable bias.
Conclusions: Unanchored MAIC should be used with caution for analyzing time-to-event outcomes. The bias factor may be used to gauge the true treatment effect.
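
For the weighting step, unanchored MAIC is commonly implemented with Signorovitch-style method-of-moments weights; the following sketch, with simulated covariates and hypothetical target means, illustrates that standard approach rather than the authors' specific code.

```python
# Method-of-moments MAIC weights: re-weight IPD patients with w_i = exp(x_i'a)
# so the weighted covariate means match the aggregate means published for the
# comparator trial.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 2))        # IPD covariates (e.g., age, biomarker)
target = np.array([0.3, -0.2])       # hypothetical published means, other trial

Xc = X - target                      # center on the target means

def objective(a):                    # convex; its gradient vanishes exactly
    return np.exp(Xc @ a).sum()      # when the weighted means hit the target

a_hat = minimize(objective, np.zeros(2), method="BFGS").x
w = np.exp(Xc @ a_hat)

weighted_means = (w[:, None] * X).sum(axis=0) / w.sum()
print(weighted_means)                # ~ [0.3, -0.2] after matching
```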


2020, Vol 12 (17), pp. 2747
Author(s): Hamid Reza Ghafarian Malamiri, Hadi Zare, Iman Rousta, Haraldur Olafsson, Emma Izquierdo Verdiguier, ...

Monitoring vegetation changes over time is very important in dry areas such as Iran, given its pronounced drought-prone agricultural system. Vegetation indices derived from remotely sensed satellite imagery are used successfully to monitor vegetation changes at various scales. However, atmospheric dust and other airborne particles, as well as gases and clouds, significantly affect the reflection of energy from the surface, especially at visible, shortwave, and infrared wavelengths. This results in imagery with missing data (gaps) and outliers, whereas vegetation change analysis requires complete, consistent time series. This study investigated the performance of the HANTS (Harmonic ANalysis of Time Series) and (M)-SSA ((Multi-channel) Singular Spectrum Analysis) algorithms in reconstructing wide gaps of missing data. Normalized Difference Vegetation Index (NDVI) time series retrieved from Landsat TM, in combination with 250 m MODIS NDVI image products, were used to simulate and identify periodic components of the NDVI time series from 1986 to 2000 and from 2000 to 2015, respectively. The gap-filling capability of HANTS and M-SSA was evaluated by filling artificially created gaps in the Landsat and MODIS data. The RMSEs (Root Mean Square Errors) between the original and reconstructed data were 0.027 NDVI for HANTS and 0.023 NDVI for M-SSA. Further, the RMSEs for 15 NDVI images removed artificially from the time series and reconstructed by HANTS and M-SSA were 0.030 and 0.025 NDVI, respectively, and the RMSEs of the original and reconstructed data for time series 6 were 0.10 and 0.04, respectively. These findings present a favorable option for addressing the missing-data challenge in NDVI time series.
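
A compact sketch of the HANTS idea follows: fit a truncated Fourier series to the NDVI series and iteratively suppress cloud-like low outliers, then use the harmonic fit to replace gaps. The number of harmonics, iterations, and threshold here are illustrative assumptions.

```python
# Harmonic gap filling in the HANTS spirit: least-squares Fourier fit with
# iterative removal of points that fall well below the fit (clouds and dust
# bias NDVI downward), then the fit itself fills the gaps.
import numpy as np

def hants(y, n_harmonics=2, n_iter=3, tol=0.05):
    """y: NDVI series with NaN gaps; returns the harmonic reconstruction."""
    n = len(y)
    t = np.arange(n)
    valid = ~np.isnan(y)
    # Design matrix: constant term plus sine/cosine pair per harmonic
    cols = [np.ones(n)]
    for k in range(1, n_harmonics + 1):
        cols += [np.cos(2 * np.pi * k * t / n), np.sin(2 * np.pi * k * t / n)]
    A = np.column_stack(cols)
    for _ in range(n_iter):
        coef, *_ = np.linalg.lstsq(A[valid], y[valid], rcond=None)
        fit = A @ coef
        low = valid & (y < fit - tol)   # points suspiciously far below the fit
        if not low.any():
            break
        valid &= ~low                   # drop them and refit
    return fit
```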

