correlated information
Recently Published Documents


TOTAL DOCUMENTS: 106 (FIVE YEARS: 35)

H-INDEX: 11 (FIVE YEARS: 3)

Symmetry ◽ 2021 ◽ Vol 13 (12) ◽ pp. 2423
Author(s): Edgar Dmitriyev ◽ Eugeniy Rogozhnikov ◽ Natalia Duplishcheva ◽ Serafim Novichkov

The growing demand for broadband Internet services is forcing scientists around the world to seek and develop new telecommunication technologies. One such technology, central to the transition from fourth- to fifth-generation wireless communication systems, is beamforming. The need for it arises from the use of millimeter waves for data transmission: this frequency range is characterized by heavy path loss, and beamforming can compensate for this significant drawback. This paper discusses basic beamforming schemes and proposes a model implemented on the basis of QuaDRiGa. The model implements a MIMO channel using symmetrical antenna arrays. In addition, methods for calculating the antenna weight coefficients from the channel matrix are compared. The first, well-known method sums the cluster responses to calculate the coefficients. The proposed method applies a singular value decomposition to the channel matrix decomposed into clusters, so that the most correlated information across clusters is taken into account when calculating the antenna coefficients. According to the results, the proposed method increases the SNR/SINR at the receiver by 8–10 dB for analog beamforming with a known channel matrix.
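To make the weight-calculation step concrete, here is a minimal NumPy sketch of SVD-based beamforming on a single narrowband channel matrix: the dominant singular vectors of a known H give the transmit and receive weights that maximize the beamformed gain. The random channel, the array sizes, and the fixed-beam baseline are illustrative assumptions and do not reproduce the paper's QuaDRiGa cluster model.

```python
import numpy as np

# Minimal sketch of SVD-based beamforming weights for a narrowband MIMO link.
# H is an (n_rx x n_tx) channel matrix; the dominant left/right singular
# vectors give the receive/transmit weight vectors that maximize the gain of a
# single analog beam.  The random channel below is purely illustrative; the
# paper builds H from QuaDRiGa cluster responses, which is not reproduced here.

rng = np.random.default_rng(1)
n_tx, n_rx = 8, 8
H = (rng.standard_normal((n_rx, n_tx)) + 1j * rng.standard_normal((n_rx, n_tx))) / np.sqrt(2)

U, s, Vh = np.linalg.svd(H)
w_tx = Vh[0].conj()            # transmit weights: dominant right singular vector
w_rx = U[:, 0]                 # receive weights: dominant left singular vector

# Effective channel gain of the beamformed link vs. a fixed (uniform) beam.
g_svd = np.abs(w_rx.conj() @ H @ w_tx) ** 2
w_fix_tx = np.ones(n_tx) / np.sqrt(n_tx)
w_fix_rx = np.ones(n_rx) / np.sqrt(n_rx)
g_fix = np.abs(w_fix_rx.conj() @ H @ w_fix_tx) ** 2
print(f"beamforming gain over fixed beam: {10 * np.log10(g_svd / g_fix):.1f} dB")
```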


2021 ◽ Vol 2021 ◽ pp. 1-10
Author(s): Meiyu Huang ◽ Yao Xu ◽ Lixin Qian ◽ Weili Shi ◽ Yaqin Zhang ◽ ...

Current remote sensing image interpretation technology focuses mainly on single-modal data and therefore cannot fully exploit the complementary and correlated information of multimodal data with heterogeneous characteristics, especially synthetic aperture radar (SAR) data and optical imagery. To solve this problem, we propose a bridge neural network (BNN)-based optical-SAR image joint intelligent interpretation framework that optimizes the feature correlation between optical and SAR images through optical-SAR matching tasks. The framework adopts a BNN to improve the extraction of features common to optical and SAR images, thereby improving the accuracy and broadening the application scenarios of specific intelligent interpretation tasks for optical-SAR/SAR/optical images. Specifically, the BNN projects optical and SAR images into a common feature space and mines their correlation through pair matching. Further, to deeply exploit the correlation between optical and SAR images and to ensure strong representation learning by the BNN, we build the QXS-SAROPT dataset, containing 20,000 pairs of perfectly aligned, high-resolution optical-SAR image patches covering diverse scenes. Experimental results on optical-to-SAR cross-modal object detection demonstrate the effectiveness and superiority of our framework. In particular, based on the QXS-SAROPT dataset, our framework achieves accuracy of up to 96% on four benchmark SAR ship detection datasets.
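As a rough illustration of the bridge-network idea, the following PyTorch sketch projects optical and SAR patches into a shared embedding space with two small encoders and trains them with a pair-matching (contrastive) loss. The encoder layout, embedding size, and loss function are assumptions made for illustration, not the authors' exact BNN.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Sketch of the bridge-network idea: two modality-specific encoders project
# optical and SAR patches into a shared embedding space, and a pair-matching
# loss pulls matching optical-SAR pairs together while pushing non-matching
# pairs apart.  Encoder sizes and the contrastive loss are illustrative
# assumptions, not the exact BNN used in the paper.

def make_encoder(in_channels, embed_dim=128):
    return nn.Sequential(
        nn.Conv2d(in_channels, 32, 3, stride=2, padding=1), nn.ReLU(),
        nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
        nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        nn.Linear(64, embed_dim),
    )

opt_encoder = make_encoder(in_channels=3)   # optical patches (RGB)
sar_encoder = make_encoder(in_channels=1)   # SAR patches (single channel)

def matching_loss(opt_batch, sar_batch, temperature=0.1):
    """Contrastive pair-matching loss over a batch of aligned optical-SAR pairs."""
    z_opt = F.normalize(opt_encoder(opt_batch), dim=1)
    z_sar = F.normalize(sar_encoder(sar_batch), dim=1)
    logits = z_opt @ z_sar.t() / temperature   # cosine similarities
    targets = torch.arange(len(z_opt))         # i-th optical matches i-th SAR
    return F.cross_entropy(logits, targets)

# Toy forward pass with random 64x64 patches.
opt = torch.randn(8, 3, 64, 64)
sar = torch.randn(8, 1, 64, 64)
print(matching_loss(opt, sar).item())
```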


Author(s): Ki-Cheol Yoon ◽ Kwang Gi Kim

Abstract: For the diagnosis of secondary lymphedema, amplitude-mode (A-mode) examination using a single ultrasound probe has been suggested as a possible diagnostic modality due to its relatively low cost, ease of use, and mobility. However, A-mode ultrasound signals over time are noisy and difficult to analyze, which makes it hard to extract well-correlated information about volume changes in each layer of skin and subcutaneous tissue. An adequate ultrasound calibration phantom is therefore needed, which in turn requires a fundamental study of phantom materials that reproduce the acoustic characteristics of skin and subcutaneous tissue. In this research, a fabrication method for gelatin-based ultrasonic phantoms covering a wide range of acoustic impedances is presented, and their acoustic characteristics and usability are discussed.
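For context, the quantities such a phantom must reproduce can be summarized in a short sketch: the characteristic acoustic impedance Z = ρc of each tissue layer and the resulting reflection at layer boundaries. The density and sound-speed figures below are rough textbook-style values chosen for illustration, not measurements from this study.

```python
# Sketch of the quantities a calibration phantom must reproduce: acoustic
# impedance Z = rho * c and the intensity reflection coefficient at a layer
# boundary.  The density and sound-speed values are rough, illustrative
# figures, not measurements from the paper.

def acoustic_impedance(density_kg_m3, sound_speed_m_s):
    """Characteristic acoustic impedance in MRayl (1 MRayl = 1e6 kg m^-2 s^-1)."""
    return density_kg_m3 * sound_speed_m_s / 1e6

def reflection_coefficient(z1, z2):
    """Fraction of incident intensity reflected at a flat boundary z1 -> z2."""
    return ((z2 - z1) / (z2 + z1)) ** 2

z_skin = acoustic_impedance(1100, 1600)   # roughly 1.7-1.8 MRayl
z_fat = acoustic_impedance(950, 1450)     # roughly 1.3-1.4 MRayl
z_gel = acoustic_impedance(1050, 1540)    # gelatin phantom, tunable via concentration

print(f"skin: {z_skin:.2f} MRayl, fat: {z_fat:.2f} MRayl, gel: {z_gel:.2f} MRayl")
print(f"reflection at skin/fat boundary: {reflection_coefficient(z_skin, z_fat):.4f}")
```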


2021 ◽ Vol 8
Author(s): Palle Duun Rohde ◽ Mette Nyegaard ◽ Mads Kjolby ◽ Peter Sørensen

The prevalence of type 2 diabetes mellitus (T2DM) is continuously rising, with more disease cases every year. T2DM is a chronic disease with many severe comorbidities and therefore remains a burden for patients and society. Disease prevention, early diagnosis, and stratified treatment are important elements in slowing down the increase in diabetes prevalence. T2DM has a substantial genetic component, with an estimated heritability of 40–70%, and more than 500 genetic loci have been associated with T2DM. Because of this intrinsic genetic basis, one tool for risk assessment is the genome-wide genetic risk score (GRS). Current GRS account for only a small proportion of T2DM risk; thus, better methods are warranted for more accurate risk assessment. T2DM is correlated with several other diseases and complex traits, and incorporating this information by adjusting the effect sizes of the included markers could improve risk prediction. The aim of this study was to develop multi-trait (MT)-GRS leveraging correlated information. We used phenotype and genotype information from the UK Biobank and summary statistics from two independent T2DM studies. Marker effects for T2DM and seven correlated traits, namely height, body mass index, pulse rate, diastolic and systolic blood pressure, smoking status, and current medication use, were estimated by logistic and linear regression within the UK Biobank. These summary statistics, together with the two independent training summary statistics, were incorporated into the MT-GRS prediction in different combinations. The prediction accuracy of the MT-GRS was improved by 12.5% compared to the single-trait GRS. Testing the MT-GRS strategy in two independent T2DM studies improved accuracy by 50–94%. Finally, combining the seven information traits with the two independent T2DM studies further increased the prediction accuracy by 34%. Across comparisons, body mass index and current medication use were the two traits with the largest weights in the construction of the MT-GRS. These results explicitly demonstrate the added benefit of leveraging correlated information when constructing genetic scores. In conclusion, constructing GRS based not only on the disease itself but also on genomic information from other correlated traits is strongly advisable for obtaining improved individual risk stratification.
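As a simplified illustration of how a multi-trait score can be assembled, the sketch below computes single-trait GRS as weighted sums of allele counts and combines them with weights learned in a tuning set. The simulated data and the logistic-regression combiner are assumptions made for illustration; they do not reproduce the paper's MT-GRS procedure or its reported gains.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

# Sketch of a multi-trait genetic risk score: a single-trait GRS is a weighted
# sum of allele counts (weights = marker effect estimates), and the multi-trait
# score combines the single-trait scores with weights learned in a tuning set.
# The logistic-regression combiner is an illustrative choice, not necessarily
# the exact MT-GRS procedure used in the paper.

rng = np.random.default_rng(0)
n_ind, n_snp, n_traits = 500, 1000, 3

genotypes = rng.integers(0, 3, size=(n_ind, n_snp))     # allele counts 0/1/2
effects = rng.normal(0, 0.05, size=(n_snp, n_traits))   # stand-ins for GWAS
                                                         # summary statistics

# Single-trait scores: one GRS column per trait.
single_trait_grs = genotypes @ effects                   # shape (n_ind, n_traits)

# Simulated disease status driven mostly by trait 0 plus noise.
liability = single_trait_grs[:, 0] + rng.normal(0, 1, n_ind)
disease = (liability > np.quantile(liability, 0.9)).astype(int)

# Combine single-trait scores into one MT-GRS with weights estimated in a
# tuning half, then score the held-out half.
tune, test = slice(0, 250), slice(250, 500)
combiner = LogisticRegression().fit(single_trait_grs[tune], disease[tune])
mt_grs = combiner.decision_function(single_trait_grs[test])

print(f"single-trait AUC: {roc_auc_score(disease[test], single_trait_grs[test, 0]):.3f}")
print(f"multi-trait  AUC: {roc_auc_score(disease[test], mt_grs):.3f}")
```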


2021 ◽ Vol 11
Author(s): Xiaojie Xia ◽ Qing Gao ◽ Xiaolin Ge ◽ Zeyuan Liu ◽ Xiaoke Di ◽ ...

Introduction: Radiotherapy (RT) is the main treatment for unoperated esophageal cancer (EC) patients. Whether adding chemotherapy (CT) to RT benefits elderly EC patients is controversial. The purpose of our study was to compare the efficacy of chemoradiotherapy (CRT) with RT alone for non-surgical elderly esophageal cancer patients. Methods: A total of 7,101 eligible EC patients older than 65 years diagnosed between 2000 and 2018 were collected from the Surveillance, Epidemiology, and End Results (SEER) database. All samples were divided into a radiotherapy group and a chemoradiotherapy group. After propensity score matching (PSM) at a 1:1 ratio, 3,020 patients were included in our analysis. The Kaplan–Meier method and log-rank test were applied to compare overall survival (OS) and cancer-specific survival (CSS). Results: After PSM, the clinical characteristics of patients in the RT and CRT groups were comparable. For EC patients older than 65 years, the 3-year OS and CSS in the CRT group were 21.8% and 27.4%, and the 5-year OS and CSS were 12.7% and 19.8%, respectively. The 3-year OS and CSS in the RT group were 6.4% and 10.4%, and the 5-year OS and CSS were 3.5% and 7.2%, respectively. Next, these patients were divided into five subgroups based on age stratification (ages 65–69; 70–74; 75–79; 80–84; ≥85). In each subgroup analysis, the 3- and 5-year OS and CSS showed significant benefits for the CRT group over the RT group (all p < 0.05). We were unable to assess toxicities between the two groups due to a lack of correlated information. Conclusions: CRT could improve OS and CSS for non-surgical EC patients older than 65 years. Adding chemotherapy to radiation showed a significant prognostic advantage for elderly esophageal cancer patients.
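For readers unfamiliar with the survival analysis used here, the following sketch runs a Kaplan–Meier comparison and a log-rank test on two simulated, already-matched arms using the lifelines package. The exponential survival times, censoring scheme, and arm sizes are illustrative assumptions, not the SEER cohort or its matching step.

```python
import numpy as np
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

# Kaplan-Meier and log-rank comparison on simulated matched arms
# (the SEER cohort and the propensity-score matching are not reproduced;
# exponential survival times with uniform censoring are assumptions).

rng = np.random.default_rng(42)
n = 1500                                    # patients per matched arm
t_crt = rng.exponential(scale=24, size=n)   # months, chemoradiotherapy arm
t_rt = rng.exponential(scale=14, size=n)    # months, radiotherapy-alone arm
censor = rng.uniform(0, 60, size=2 * n)     # administrative censoring

dur = np.concatenate([t_crt, t_rt])
observed = (dur <= censor).astype(int)
dur = np.minimum(dur, censor)
group = np.array(["CRT"] * n + ["RT"] * n)

kmf = KaplanMeierFitter()
for g in ["CRT", "RT"]:
    mask = group == g
    kmf.fit(dur[mask], event_observed=observed[mask], label=g)
    print(g, "3-year OS estimate:", round(float(kmf.predict(36)), 3))

res = logrank_test(dur[group == "CRT"], dur[group == "RT"],
                   event_observed_A=observed[group == "CRT"],
                   event_observed_B=observed[group == "RT"])
print("log-rank p-value:", res.p_value)
```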


2021 ◽ Vol 5 (3) ◽ pp. 155-160
Author(s): Domokos Máthé ◽ Bálint Kiss ◽ Bernadett Pályi ◽ Zoltán Kis ◽ László Forgách ◽ ...

Abstract: Imaging keeps pervading biomedical sciences from the nanoscale to the bedside. Connecting the hierarchical levels of biomedicine with relevant imaging approaches, however, remains a challenge. Here we present a concept, called "3M", which can carry a question formulated at the bedside across the wide-ranging hierarchical organization of the living organism, from the molecular level, through the small-animal scale, to whole-body human functional imaging. We present an example of a nanoparticle development pipeline, extending from atomic force microscopy to preclinical whole-body imaging methods, to highlight the essential features of the 3M concept, which integrates multi-scale resolution and quantification into a single logical process. Using this nanoscale-to-human whole-body approach, we present the successful development, characterisation, and application of Prussian Blue nanoparticles for a variety of imaging modalities, extending to isotope payload quantification and shape-biodistribution relationships. Translating an idea from the bedside to the molecular level and back requires a set of novel combinatorial imaging methodologies interconnected into a logical pipeline. The proposed integrative molecules-to-mouse-to-man (3M) approach offers a promising, clinically oriented toolkit with the prospect of obtaining an ever-increasing amount of correlated information from as small a voxel of the human body as possible.


2021 ◽ Vol 6 (1) ◽ pp. 20
Author(s): Ilyas Ahmad Huqqani ◽ Tay Lea Tien ◽ Junita Mohamad-Saleh

Landslides are natural disasters that occur mostly in hilly areas. Landslide hazard mapping classifies prone areas in order to mitigate the risk of landslide hazards. This paper compares spatial landslide prediction performance using an artificial neural network (ANN) model with different input data configurations, different numbers of hidden neurons, and two normalization techniques on a data set of Penang Island, Malaysia. The data set involves twelve landslide-influencing factors, five of which take continuous values while the remaining seven are categorical/discrete. These factors are considered in three different configurations, i.e., original (OR), frequency ratio (FR), and mixed-type (MT) data, each used separately as input to train the ANN model. The number of hidden neurons in the hidden layer has a significant effect on the final output. In addition, the three data configurations are processed using two different normalization methods, i.e., mean-standard deviation (Mean-SD) and Min-Max. Landslide causative data often contain correlated information caused by overlapping input instances; therefore, principal component analysis (PCA) is used to eliminate this correlated information. The area under the receiver operating characteristic (ROC) curve, i.e., AUC, is applied to verify the produced landslide hazard maps. The best AUC results for the Mean-SD and Min-Max schemes with PCA are 96.72% and 96.38%, respectively. The results show that Mean-SD normalization with PCA on the MT data configuration yields the best validation accuracy and AUC and the lowest AIC at 100 hidden neurons, and that this combination is more robust and stable in training the MLP model for landslide prediction.
Keywords: Landslide; ANN; Hidden Neurons; Normalization; PCA; ROC; Hazard map
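A compact scikit-learn sketch of the described processing chain (Mean-SD normalization, PCA to remove correlated information, an MLP with a chosen number of hidden neurons, and AUC-based validation) is shown below on synthetic data. The twelve synthetic "factors" and the hidden-layer size are assumptions for illustration and do not represent the Penang Island data set.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import roc_auc_score

# Pipeline sketch: mean-standard deviation (z-score) normalization, PCA to
# remove correlated information among the influencing factors, an MLP with a
# chosen number of hidden neurons, and AUC-based validation.  Synthetic data
# stands in for the real landslide factors.

X, y = make_classification(n_samples=2000, n_features=12, n_informative=6,
                           n_redundant=4, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

model = make_pipeline(
    StandardScaler(),                 # Mean-SD normalization
    PCA(n_components=0.95),           # keep components explaining 95% of variance
    MLPClassifier(hidden_layer_sizes=(100,), max_iter=1000, random_state=0),
)
model.fit(X_tr, y_tr)
auc = roc_auc_score(y_te, model.predict_proba(X_te)[:, 1])
print(f"validation AUC: {auc:.3f}")
```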


Entropy ◽ 2021 ◽ Vol 23 (3) ◽ pp. 265
Author(s): Ran Tamir (Averbuch) ◽ Neri Merhav

Typical random codes (TRCs) in a communication scenario of source coding with side information at the decoder are the main subject of this work. We study the semi-deterministic code ensemble, a certain variant of the ordinary random binning code ensemble, in which the relatively small type classes of the source are deterministically partitioned into the available bins in a one-to-one manner. As a consequence, the error probability decreases dramatically. The random binning error exponent and the error exponent of the TRCs are derived and proved to be equal to one another in a few important special cases. We show that the performance under optimal decoding can also be attained by certain universal decoders, e.g., the stochastic likelihood decoder with an empirical entropy metric. Moreover, we discuss the trade-offs between the error exponent and the excess-rate exponent for the typical random semi-deterministic code and characterize its optimal rate function. We show that for any pair of correlated information sources, both the error and the excess-rate probabilities vanish exponentially as the blocklength tends to infinity.
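To illustrate the binning setup, the toy sketch below hashes a short source block into one of roughly 2^(nR) bins and decodes by searching the bin for the candidate with minimum empirical conditional entropy given the side information. The deterministic minimum-entropy decoder, the blocklength, the rate, and the binary-symmetric correlation are simplifying assumptions standing in for the paper's semi-deterministic ensemble and stochastic likelihood decoder.

```python
import hashlib
import numpy as np
from collections import Counter
from itertools import product

# Toy random binning for source coding with side information: the encoder
# hashes the source block x into one of 2^(nR) bins, and the decoder searches
# the bin for the candidate with the smallest empirical conditional entropy
# given the side information y.  Blocklength, rate, and BSC crossover
# probability are illustrative; brute-force decoding only works for tiny n.

rng = np.random.default_rng(0)
n, rate, p = 14, 0.6, 0.1
num_bins = 2 ** int(np.ceil(n * rate))

x = rng.integers(0, 2, n)                      # source block
y = x ^ (rng.random(n) < p).astype(int)        # side information: x through BSC(p)

def bin_index(seq):
    """Binning realized as a fixed hash of the sequence."""
    digest = hashlib.sha256(bytes(seq.tolist())).digest()
    return int.from_bytes(digest[:8], "big") % num_bins

def empirical_cond_entropy(cand, side):
    """Empirical conditional entropy H(cand | side) in bits per symbol."""
    joint = Counter(zip(cand, side))
    marg = Counter(side)
    return -sum((c / n) * np.log2(c / marg[b]) for (_, b), c in joint.items())

# Decoder: brute-force search of the received bin.
target_bin = bin_index(x)
candidates = (np.array(c) for c in product((0, 1), repeat=n))
in_bin = [c for c in candidates if bin_index(c) == target_bin]
x_hat = min(in_bin, key=lambda c: empirical_cond_entropy(c, y))
print("decoded correctly:", bool(np.array_equal(x_hat, x)))
```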


IEEE Access ◽ 2021 ◽ pp. 1-1
Author(s): Imanol Granada ◽ Pedro M. Crespo ◽ Mariano E. Burich ◽ Javier Garcia-Frias
