redundant information
Recently Published Documents

TOTAL DOCUMENTS: 295 (five years: 77)
H-INDEX: 20 (five years: 4)

2022 ◽ Vol 22 (2) ◽ pp. 1-15 ◽ Author(s): Tu N. Nguyen, Sherali Zeadally

Conventional data collection methods that use Wireless Sensor Networks (WSNs) suffer from disadvantages such as deployment location limitations, geographical distance, and the high construction and deployment costs of WSNs. Recently, various efforts have promoted mobile crowd-sensing (e.g., a community of people using mobile devices) as a way to collect data with existing resources. A Mobile Crowd-Sensing System can be considered a Cyber-Physical System (CPS) because it allows people with mobile devices to collect and supply data to CPS centers. In practical mobile crowd-sensing applications, because budgets for the different expenditure categories in the system are limited, it is necessary to minimize the collection of redundant information to save resources for the investor. We study the problem of selecting participants in Mobile Crowd-Sensing Systems without redundant information such that the number of users is minimized and the number of records (events) reported by users is maximized, known as the Participant-Report-Incident Redundant Avoidance (PRIRA) problem. We propose a new approximation algorithm, the Maximum-Participant-Report Algorithm (MPRA), to solve the PRIRA problem. Through rigorous theoretical analysis and experimentation, we demonstrate that our proposed method performs well within reasonable bounds of computational complexity.
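The PRIRA objective (cover as many distinct reported events as possible while selecting as few users as possible) resembles the classic maximum-coverage problem. The sketch below is a generic greedy heuristic for that problem, not the authors' MPRA, whose details the abstract does not give; the participant and event identifiers are hypothetical:

```python
def select_participants(candidates, budget):
    """Greedy selection: repeatedly pick the participant whose reported
    events add the most not-yet-covered records.
    candidates: dict mapping participant id -> set of reported event ids."""
    covered, chosen = set(), []
    remaining = dict(candidates)
    for _ in range(budget):
        best = max(remaining, key=lambda p: len(remaining[p] - covered),
                   default=None)
        # stop when no participant contributes non-redundant information
        if best is None or not (remaining[best] - covered):
            break
        chosen.append(best)
        covered |= remaining.pop(best)
    return chosen, covered

# hypothetical toy instance: three candidate participants and their events
candidates = {"u1": {1, 2, 3}, "u2": {3, 4}, "u3": {4, 5, 6, 7}}
chosen, covered = select_participants(candidates, budget=2)
```

With a budget of two users, the greedy pass picks "u3" (four new events) and then "u1" (three new events), covering all seven events while "u2" would have added only redundant records.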


2022 ◽ Author(s): Sagnik Banerjee, Carson Andorf

Advancement in technology has enabled sequencing machines to produce vast amounts of genetic data, increasing storage demands. Most genomic software utilizes read alignments for several purposes, including transcriptome assembly and gene count estimation. Herein we present ABRIDGE, a state-of-the-art compressor for SAM alignment files offering users both lossless and lossy compression options. This reference-based file compressor achieves the best compression ratio among all compression software, ensuring lower space demands and faster file transmission. Central to the software is a novel algorithm that retains only non-redundant information. This new approach has allowed ABRIDGE to achieve a compression ratio 16% higher than the second-best compressor for RNA-Seq reads and over 35% higher for DNA-Seq reads. ABRIDGE also offers users the option to randomly access locations without decompressing the entire file. ABRIDGE is distributed under the MIT license and can be obtained from GitHub and Docker Hub. We anticipate that the user community will adopt ABRIDGE within their existing pipelines, encouraging further research in this domain.
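ABRIDGE's internal format is not detailed in the abstract; the sketch below only illustrates the general idea behind reference-based alignment compression, namely storing an aligned read as a position plus its mismatches against the reference rather than as raw bases. The sequences are hypothetical:

```python
def encode_read(reference, read, pos):
    """Encode an aligned read as (pos, length, mismatches): only the
    positions where the read differs from the reference are stored."""
    mismatches = [(i, base) for i, base in enumerate(read)
                  if reference[pos + i] != base]
    return (pos, len(read), mismatches)

def decode_read(reference, record):
    """Reconstruct the original read from the reference and the record."""
    pos, length, mismatches = record
    bases = list(reference[pos:pos + length])
    for i, base in mismatches:
        bases[i] = base
    return "".join(bases)

reference = "ACGTACGTACGTACGT"
read = "ACGAACGT"            # one substitution relative to the reference
record = encode_read(reference, read, pos=0)
```

For a read that matches the reference closely, the record holds a handful of (offset, base) pairs instead of the full sequence; decoding is an exact round trip.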


2021 ◽ Vol 6 (2 (114)) ◽ pp. 59-70 ◽ Author(s): Pylyp Prystavka, Kseniia Dukhnovska, Oksana Kovtun, Olga Leshchenko, Olha Cholyshkina, ...

An information technology that evaluates redundant information using digital-image preprocessing and segmentation methods has been devised. Metrics for estimating the redundant information contained in a photo image, based on texture variability, were proposed. Using aerial photography data as an example, the proposed assessment was tested and studied in practice. Digital images formed by various optoelectronic facilities are distorted by interference of various kinds, which complicates both visual analysis of the images by a human and their automatic processing. A solution can be obtained through preprocessing, which increases the informativeness of digital image data while reducing content overall. An experimental study was carried out on how image informativeness depends on the application of preprocessing filters to digital images, as a function of the methods' parameter values. It was established that sliding-window analysis algorithms can significantly increase the resolution of the analysis in the time domain while maintaining fairly high resolution in the frequency domain. The introduced metrics can be used in computer vision, machine and deep learning, and in devising information technologies for image recognition. A prospect for further work is to increase the efficiency of processing monitoring results by automating the processing of received data to identify informative areas, which would reduce the time needed for visual data analysis. The introduced metrics can also be used in the development of automated systems for recognizing air surveillance data.
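The paper's exact texture-variability metric is not given in the abstract; as a hedged sketch of the general idea, a sliding-window variance map can flag locally uniform (and hence redundant) regions of an image. All window sizes and thresholds below are illustrative assumptions:

```python
import numpy as np

def local_variance_map(img, win=3):
    """Variance of pixel intensities inside each win x win sliding window."""
    h, w = img.shape
    out = np.empty((h - win + 1, w - win + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = img[i:i + win, j:j + win].var()
    return out

def redundancy_score(img, win=3, thresh=1e-3):
    """Fraction of windows whose texture variability falls below thresh;
    high values indicate large locally uniform (redundant) regions."""
    return float((local_variance_map(img, win) < thresh).mean())

rng = np.random.default_rng(42)
flat = np.full((12, 12), 0.5)      # uniform patch: fully redundant
textured = rng.random((12, 12))    # high texture variability
```

A uniform patch scores 1.0 (every window is redundant), while a highly textured patch scores near zero; thresholding the variance map is one simple way to segment informative areas for further analysis.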


2021 ◽ Vol 22 (24) ◽ pp. 13526 ◽ Author(s): Felix Broecker

The evolutionary origin of the genome remains elusive. Here, I hypothesize that its first iteration, the protogenome, was a multi-ribozyme RNA. It evolved, likely within liposomes (the protocells) forming in dry-wet cycling environments, through the random fusion of ribozymes by a ligase and was amplified by a polymerase. The protogenome thereby linked, in one molecule, the information required to seed the protometabolism (a combination of RNA-based autocatalytic sets) in newly forming protocells. If this combination of autocatalytic sets was evolutionarily advantageous, the protogenome would have amplified in a population of multiplying protocells. It likely was a quasispecies with redundant information, e.g., multiple copies of one ribozyme. As such, new functionalities could evolve, including a genetic code. Once one or more components of the protometabolism were templated by the protogenome (e.g., when a ribozyme was replaced by a protein enzyme), and/or addiction modules evolved, the protometabolism became dependent on the protogenome. Along with increasing fidelity of the RNA polymerase, the protogenome could grow, e.g., by incorporating additional ribozyme domains. Finally, the protogenome could have evolved into a DNA genome with increased stability and storage capacity. I will provide suggestions for experiments to test some aspects of this hypothesis, such as evaluating the ability of ribozyme RNA polymerases to generate random ligation products and testing the catalytic activity of linked ribozyme domains.


Gesture ◽ 2021 ◽ Vol 20 (1) ◽ pp. 103-134 ◽ Author(s): Izidor Mlakar, Matej Rojc, Simona Majhenič, Darinka Verdonik

The research proposed in this paper focuses on pragmatic interlinks between discourse markers (DMs) and non-verbal behavior. Although non-verbal behavior is recognized to add non-redundant information, and social interaction is recognized as more than the mere transmission of words and sentences, the evidence regarding grammatical/linguistic interlinks between verbal and non-verbal concepts is vague and limited to restricted domains. This is even more evident when non-verbal behavior acts in the foreground but contributes to the structure and organization of the discourse. This research investigates the multimodal nature of discourse markers by observing their linguistic and paralinguistic properties in informal discourse. We perform a quantitative analysis with case studies for representative cases. The results show that discourse markers and background non-verbal behavior tend to follow a similar functionality in interaction. Therefore, by examining them together, one gains more insight into their true intent despite the high multifunctionality of both non-verbal behavior and DMs.


Sensors ◽ 2021 ◽ Vol 21 (22) ◽ pp. 7731 ◽ Author(s): Emmanuel Pintelas, Ioannis E. Livieris, Panagiotis E. Pintelas

Deep convolutional neural networks have shown remarkable performance in the image classification domain. However, Deep Learning (DL) models are vulnerable to the noise and redundant information encapsulated in high-dimensional raw input images, leading to unstable and unreliable predictions. Autoencoders constitute an unsupervised dimensionality reduction technique, proven to filter out noise and redundant information and create robust and stable feature representations. In this work, to address the vulnerability of DL models, we propose a convolutional autoencoder topological model that compresses and filters out noise and redundant information from initial high-dimensional input images and then feeds this compressed output into convolutional neural networks. Our results reveal the efficiency of the proposed approach, which leads to a significant performance improvement compared to Deep Learning models trained on the initial raw images.
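As a minimal sketch of why an autoencoder bottleneck filters noise and redundancy, the example below uses a linear autoencoder (whose optimum coincides with PCA, so the weights can be computed in closed form) rather than the paper's convolutional topology, and synthetic low-rank data standing in for images:

```python
import numpy as np

rng = np.random.default_rng(0)
# toy "images": 300 samples that live near a 3-D subspace of R^32,
# corrupted by additive noise (the redundant/noisy part to filter out)
basis, _ = np.linalg.qr(rng.normal(size=(32, 3)))   # orthonormal 3-D basis
codes = rng.normal(size=(300, 3))
clean = codes @ basis.T
noisy = clean + 0.1 * rng.normal(size=clean.shape)

# a linear autoencoder with a k-dim bottleneck is optimized by PCA:
# encoder = top-k principal directions W, decoder = W.T
k = 3
mu = noisy.mean(axis=0)
_, _, Vt = np.linalg.svd(noisy - mu, full_matrices=False)
W = Vt[:k].T                                        # 32 x k encoder weights
recon = (noisy - mu) @ W @ W.T + mu                 # encode then decode

err_noisy = np.mean((noisy - clean) ** 2)   # error before filtering
err_recon = np.mean((recon - clean) ** 2)   # error after the bottleneck
```

Because the bottleneck keeps only the directions carrying most of the variance, the noise spread across the remaining 29 dimensions is discarded and the reconstruction is closer to the clean signal than the raw input was, which is the mechanism the proposed pipeline exploits before classification.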


Author(s): Jin P. Gerlach, Ronald T. Cenfetelli

Over the years, the number of digital technologies that individuals use in their work and nonwork lives has increased significantly. These technologies often interact with and depend on one another, which creates new challenges and opportunities for individuals. For instance, multiple digital technologies might be incompatible or might offer redundant information. In this research, we offer a framework that can help scholars study phenomena involving multiple digital technologies and can assist designers and developers in making design decisions that facilitate beneficial interactions between technologies and mitigate undesirable ones.


2021 ◽ Vol 12 ◽ Author(s): Cristina Vargas, Sergio Moreno-Ríos

At intersections, drivers need to infer which ways are allowed by interpreting mandatory and/or prohibitory traffic signs. Speed and accuracy in this decision-making process are crucial for avoiding accidents. Previous studies show that integrating information from prohibitory signs is generally more difficult than from mandatory signs. In Study 1, we compared combined redundant signalling conditions with simple sign conditions at three-way intersections. In Study 2, we carried out a survey among professionals responsible for signposting to test whether common practices are consistent with experimental research. In Study 1, an experimental task was administered (n = 24); in Study 2, the survey response rate was 17%, and the respondents included those responsible for major Spanish cities such as Madrid and Barcelona. Study 1 showed that inferences with mandatory signs are faster than those with prohibitory signs, and that redundant information improves performance only for prohibitory signs. In Study 2, prohibitory signs were the ones most frequently chosen by the professionals responsible for signposting. In conclusion, the signs most used in practice were, according to the laboratory study, not the best ones for signposting, because the fastest responses were obtained for mandatory signs, followed by redundant signs.


Entropy ◽ 2021 ◽ Vol 23 (11) ◽ pp. 1377 ◽ Author(s): Nicolás Mirkin, Diego A. Wisniacki

Quantum Darwinism (QD) is the process responsible for the proliferation of redundant information into the environment of a quantum system undergoing decoherence. This enables independent observers to access separate environmental fragments and reach consensus about the system's state. In this work, we study the effect of disorder on the emergence of QD and find that a highly disordered environment is greatly beneficial for it. By introducing the notion of lack of redundancy to quantify objectivity, we show that it behaves analogously to the entanglement entropy (EE) of the environmental eigenstate taken as the initial state. This allows us to estimate the many-body mobility edge by means of our Darwinistic measure, implying the existence of a critical degree of disorder beyond which the degree of objectivity rises as the environment grows. The latter hints at the key role that disorder may play when the environment is of thermodynamic size. Finally, we show that a highly disordered evolution may reduce the spoiling of redundancy in the presence of intra-environment interactions.
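The redundancy underlying QD can be illustrated on a toy branching state (a textbook GHZ-like example, not the disordered model studied in the paper): when the system's pointer state is copied into every environment qubit, each small fragment already supplies the full classical information about the system, producing the characteristic mutual-information plateau:

```python
import numpy as np

def ptrace(rho, keep, n):
    """Partial trace of an n-qubit density matrix, keeping qubits in `keep`
    (an ascending list of qubit indices)."""
    rho = rho.reshape([2] * (2 * n))
    m = n
    for q in range(n - 1, -1, -1):        # trace out, highest qubit first
        if q in keep:
            continue
        rho = np.trace(rho, axis1=q, axis2=q + m)
        m -= 1
    d = 2 ** len(keep)
    return rho.reshape(d, d)

def entropy(rho):
    """Von Neumann entropy in bits."""
    p = np.linalg.eigvalsh(rho)
    p = p[p > 1e-12]
    return float(-np.sum(p * np.log2(p)))

def mutual_info(rho, n, frag):
    """Quantum mutual information I(S:F), system = qubit 0."""
    hs = entropy(ptrace(rho, [0], n))
    hf = entropy(ptrace(rho, frag, n))
    hsf = entropy(ptrace(rho, [0] + frag, n))
    return hs + hf - hsf

# branching state over 1 system qubit + 3 environment qubits:
# (|0000> + |1111>)/sqrt(2), i.e. maximally redundant records
n = 4
psi = np.zeros(2 ** n)
psi[0] = psi[-1] = 1 / np.sqrt(2)
rho = np.outer(psi, psi)

plateau = [mutual_info(rho, n, list(range(1, 1 + f))) for f in (1, 2, 3)]
```

Every proper fragment yields I(S:F) = H(S) = 1 bit (the classical plateau), and only the full environment reaches 2H(S); a "lack of redundancy" measure as used in the paper quantifies deviations from exactly this plateau.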


Micromachines ◽ 2021 ◽ Vol 12 (10) ◽ pp. 1271 ◽ Author(s): Hongmin Gao, Yiyan Zhang, Yunfei Zhang, Zhonghao Chen, Chenming Li, ...

In recent years, hyperspectral image (HSI) classification has attracted considerable attention. Various methods based on convolutional neural networks have achieved outstanding classification results. However, most of them suffer from underutilization of spectral-spatial features, redundant information, and convergence difficulties. To address these problems, a novel 3D-2D multibranch feature fusion and dense attention network is proposed for HSI classification. Specifically, the 3D multibranch feature fusion module integrates multiple receptive fields in the spatial and spectral dimensions to obtain shallow features. Then, a 2D densely connected attention module, consisting of densely connected layers and a spatial-channel attention block, processes these features. The former alleviates gradient vanishing and enhances feature reuse during training. The latter emphasizes meaningful features and suppresses interfering information along the two principal dimensions: the channel and spatial axes. Experimental results on four benchmark hyperspectral image datasets demonstrate that the model effectively improves classification performance with great robustness.
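The paper's exact attention block is not specified in the abstract; as a hedged sketch of the channel half of a spatial-channel attention mechanism, a squeeze-and-excitation-style gate shows how channels carrying redundant or interfering information can be suppressed. The shapes, reduction ratio, and random weights are illustrative assumptions:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def channel_attention(x, w1, w2):
    """Squeeze-and-excitation-style channel attention on a (C, H, W) map:
    global average pooling -> small bottleneck MLP -> sigmoid gates that
    reweight channels, emphasizing informative ones."""
    squeeze = x.mean(axis=(1, 2))                        # (C,) channel summary
    gates = sigmoid(np.maximum(squeeze @ w1, 0.0) @ w2)  # (C,), each in (0, 1)
    return x * gates[:, None, None]                      # per-channel scaling

rng = np.random.default_rng(1)
C, H, W, r = 8, 5, 5, 2                  # r: bottleneck reduction ratio
x = rng.normal(size=(C, H, W))           # stand-in for a shallow feature map
w1 = rng.normal(scale=0.1, size=(C, C // r))
w2 = rng.normal(scale=0.1, size=(C // r, C))
y = channel_attention(x, w1, w2)
```

The output keeps the input shape while each channel is scaled by a learned gate in (0, 1); a spatial branch would apply the same idea along the H x W axes, together covering the two principal dimensions the abstract mentions.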

