A negative storage model for precise but compact storage of genetic variation data

Database ◽  
2020 ◽  
Vol 2020 ◽  
Author(s):  
Guillermo Gonzalez-Calderon ◽  
Ruizheng Liu ◽  
Rodrigo Carvajal ◽  
Jamie K Teer

Abstract Falling sequencing costs and large initiatives are resulting in increasing amounts of data available for investigator use. However, there are informatics challenges in being able to access genomic data. Performance and storage are well-appreciated issues, but precision is critical for meaningful analysis and interpretation of genomic data. There is an inherent accuracy vs. performance trade-off with existing solutions. The most common approach (Variant-only Storage Model, VOSM) stores only variant data. Systems must therefore assume that everything not variant is reference, sacrificing precision and potentially accuracy. A more complete model (Full Storage Model, FSM) would store the state of every base (variant, reference and missing) in the genome thereby sacrificing performance. A compressed variation of the FSM can store the state of contiguous regions of the genome as blocks (Block Storage Model, BLSM), much like the file-based gVCF model. We propose a novel approach by which this state is encoded such that both performance and accuracy are maintained. The Negative Storage Model (NSM) can store and retrieve precise genomic state from different sequencing sources, including clinical and whole exome sequencing panels. Reduced storage requirements are achieved by storing only the variant and missing states and inferring the reference state. We evaluate the performance characteristics of FSM, BLSM and NSM and demonstrate dramatic improvements in storage and performance using the NSM approach.
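The core NSM idea described above — store only variant and missing states explicitly, and infer the reference state for everything else — can be sketched in a few lines. The table layouts and names below are illustrative assumptions for exposition, not the authors' actual database schema.

```python
# Minimal sketch of the Negative Storage Model (NSM) lookup logic:
# only variant and missing states are stored; reference is inferred.

variants = {("chr1", 12345): "A>G"}        # explicitly stored variant calls
missing = {("chr1", (20000, 20100))}       # explicitly stored no-call regions

def genomic_state(chrom, pos):
    """Return the state of a single base: variant, missing, or (inferred) reference."""
    if (chrom, pos) in variants:
        return "variant"
    for c, (start, end) in missing:
        if c == chrom and start <= pos <= end:
            return "missing"
    return "reference"  # anything not stored is inferred to be reference

print(genomic_state("chr1", 12345))  # variant
print(genomic_state("chr1", 20050))  # missing
print(genomic_state("chr1", 99999))  # reference
```

Note how the storage cost scales with the number of variant and missing records only, while a full storage model would pay per base and a variant-only model would misreport the missing region as reference.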

2014 ◽  
Vol 519-520 ◽  
pp. 9-12
Author(s):  
Ming Chu Li ◽  
Liang Zhang ◽  
Cheng Guo

This paper focuses on the construction of an efficient dynamic provable data possession (DPDP) scheme for public auditing. We improve the existing proof-of-storage model by using an authenticated skip list structure for authentication. We further explore an embedded Merkle hash tree (MHT) structure that helps our scheme accurately locate the incorrect parts in batch auditing. Extensive security and performance evaluation shows that the proposed model is highly efficient and offers a good trade-off between robust construction and storage cost.
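The MHT-based localization idea can be illustrated with a small sketch: a Merkle hash tree detects that a batch of blocks is inconsistent with the stored root, and per-block leaf hashes pinpoint which blocks are wrong. This is a simplified illustration of the general technique, not the paper's actual scheme (which authenticates via a skip list and MHT proofs rather than full leaf comparison).

```python
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(blocks):
    """Root hash of a Merkle hash tree over the given blocks."""
    level = [h(b) for b in blocks]
    while len(level) > 1:
        if len(level) % 2:               # duplicate last node if the level is odd
            level.append(level[-1])
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

def locate_bad_blocks(stored, received):
    """Compare per-block leaf hashes to pinpoint the incorrect parts."""
    return [i for i, (a, b) in enumerate(zip(stored, received)) if h(a) != h(b)]

good = [b"blk0", b"blk1", b"blk2", b"blk3"]
tampered = [b"blk0", b"blkX", b"blk2", b"blk3"]
assert merkle_root(good) != merkle_root(tampered)   # batch audit fails
print(locate_bad_blocks(good, tampered))            # [1]
```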


2018 ◽  
Vol 2018 ◽  
pp. 1-13 ◽  
Author(s):  
Abdu Gumaei ◽  
Rachid Sammouda ◽  
Abdul Malik S. Al-Salman ◽  
Ahmed Alsanad

Multispectral palmprint recognition system (MPRS) is an essential technology for effective human identification and verification tasks. To improve the accuracy and performance of MPRS, a novel approach based on autoencoder (AE) and regularized extreme learning machine (RELM) is proposed in this paper. The proposed approach is intended to make the recognition faster by reducing the number of palmprint features without degrading the accuracy of the classifier. To achieve this objective, first, the region of interest (ROI) is extracted from palmprint images by David Zhang’s method. Second, an efficient normalized Gist (NGist) descriptor is used for palmprint feature extraction. Then, the dimensionality of the extracted features is reduced using an optimized AE. Finally, the reduced features are fed to the RELM for classification. A comprehensive set of experiments is conducted on the benchmark MS-PolyU dataset. The results are significantly better than those of state-of-the-art approaches, revealing the robustness and efficiency of the proposed approach.
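The RELM classifier at the end of the pipeline has a closed-form training rule: a random hidden layer maps the (AE-reduced) features to a feature space H, and the output weights are solved by ridge regression, beta = (HᵀH + λI)⁻¹HᵀY. The sketch below shows this on toy data; the hidden size, regularization value, and toy blobs are illustrative assumptions, not the paper's settings.

```python
import numpy as np

rng = np.random.default_rng(0)

def train_relm(X, Y, n_hidden=64, lam=1e-2):
    """Regularized ELM: random hidden layer, ridge-regularized output weights.
    beta = (H^T H + lam*I)^{-1} H^T Y  (closed form, no backpropagation)."""
    W = rng.standard_normal((X.shape[1], n_hidden))
    b = rng.standard_normal(n_hidden)
    H = np.tanh(X @ W + b)                  # random nonlinear feature map
    beta = np.linalg.solve(H.T @ H + lam * np.eye(n_hidden), H.T @ Y)
    return W, b, beta

def predict_relm(X, W, b, beta):
    return np.tanh(X @ W + b) @ beta

# Toy sanity check: two well-separated Gaussian blobs, one-hot labels.
X = np.vstack([rng.normal(-2, 1, (50, 5)), rng.normal(2, 1, (50, 5))])
Y = np.vstack([np.tile([1, 0], (50, 1)), np.tile([0, 1], (50, 1))])
W, b, beta = train_relm(X, Y)
pred = predict_relm(X, W, b, beta).argmax(axis=1)
acc = (pred == Y.argmax(axis=1)).mean()
print(acc)
```

Because training is a single linear solve rather than iterative gradient descent, RELM is fast, which is why reducing the feature dimension with an AE first keeps the whole pipeline efficient.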


Author(s):  
Luis Cláudio de Jesus-Silva ◽  
Antônio Luiz Marques ◽  
André Luiz Nunes Zogahib

This article aims to examine the variable compensation program for performance implemented in the Brazilian Judiciary. For this purpose, a survey was conducted with the civil servants of the Court of Justice of the State of Roraima, Amazon, Brazil. The strategy consisted of field research with a quantitative approach, combining descriptive and explanatory research with a survey conducted through a structured questionnaire made available over the Internet. The sample comprised 37.79% of the surveyed population. The results indicate the effectiveness of the program as a tool for motivation and performance improvement, and also the need for some adjustments and improvements, especially regarding the perceived equity of the program and the distribution of rewards.


Author(s):  
Jaber Almutairi ◽  
Mohammad Aldossary

Abstract Recently, the number of Internet of Things (IoT) devices connected to the Internet has increased dramatically, as has the data produced by these devices. This requires offloading IoT tasks to resource-rich nodes such as Edge Computing and Cloud Computing, to relieve devices of heavy computation and storage. Although Edge Computing is a promising enabler for latency-sensitive applications, its deployment produces new challenges. Besides, different service architectures and offloading strategies have different impacts on the service time performance of IoT applications. Therefore, this paper presents a novel approach for task offloading in an Edge-Cloud system in order to minimize the overall service time for latency-sensitive applications. The approach adopts fuzzy logic algorithms, considering application characteristics (e.g., CPU demand, network demand and delay sensitivity) as well as resource utilization and resource heterogeneity. A number of simulation experiments are conducted to compare the proposed approach with related approaches; it is found to improve the overall service time for latency-sensitive applications and to utilize edge-cloud resources effectively. The results also show that different offloading decisions within the Edge-Cloud system can lead to varying service times due to the computational resources and communication types involved.
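A fuzzy-logic offloading decision of the kind described can be sketched as follows: task and resource characteristics are fuzzified with membership functions, simple rules combine them, and the higher-scoring target wins. The membership shapes, rule weights, and thresholds here are illustrative assumptions, not the paper's actual rule base.

```python
def tri(x, a, b, c):
    """Triangular membership function on [a, c], peaking at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def offload_decision(cpu_demand, net_demand, delay_sensitivity, edge_util):
    """Return 'edge' or 'cloud' from fuzzified task/resource characteristics."""
    heavy_cpu = tri(cpu_demand, 0.3, 1.0, 1.7)   # degree the task is CPU-heavy
    heavy_net = tri(net_demand, 0.3, 1.0, 1.7)
    sensitive = tri(delay_sensitivity, 0.3, 1.0, 1.7)
    edge_busy = tri(edge_util, 0.5, 1.0, 1.5)
    # Rules: latency-sensitive, light tasks prefer the edge unless it is busy;
    # computation- or network-heavy tasks prefer the cloud.
    edge_score = min(sensitive, 1 - edge_busy)
    cloud_score = max(heavy_cpu, heavy_net)
    return "edge" if edge_score >= cloud_score else "cloud"

print(offload_decision(0.2, 0.2, 0.9, 0.1))  # light, delay-sensitive -> edge
print(offload_decision(1.5, 1.0, 0.2, 0.9))  # heavy, edge is busy -> cloud
```

The point the abstract makes — that different offloading decisions yield different service times — corresponds here to how the same task can land on the edge or the cloud depending on current edge utilization.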


Author(s):  
Mark O Sullivan ◽  
Carl T Woods ◽  
James Vaughan ◽  
Keith Davids

As it is appreciated that learning is a non-linear process – implying that coaching methodologies in sport should accommodate it – it is reasonable to suggest that player development pathways should also account for this non-linearity. A constraints-led approach (CLA), predicated on the theory of ecological dynamics, has been suggested as a viable framework for capturing the non-linearity of learning, development and performance in sport. The CLA articulates how skills emerge through the interaction of different constraints (task-environment-performer). However, despite its well-established theoretical roots, there are challenges to implementing it in practice. Accordingly, to help practitioners navigate such challenges, this paper proposes a user-friendly framework that demonstrates the benefits of a CLA. Specifically, to conceptualize the non-linear and individualized nature of learning, and how it can inform player development, we apply Adolph’s notion of learning IN development to explain the fundamental ideas of a CLA. We then exemplify a learning IN development framework, based on a CLA, brought to life in a high-level youth football organization. We contend that this framework can provide a novel approach for presenting the key ideas of a CLA and its powerful pedagogic concepts to practitioners at all levels, informing coach education programs, player development frameworks and learning environment designs in sport.


2021 ◽  
Vol 15 (5) ◽  
pp. 1-32
Author(s):  
Quang-huy Duong ◽  
Heri Ramampiaro ◽  
Kjetil Nørvåg ◽  
Thu-lan Dam

Dense subregion (subgraph and subtensor) detection is a well-studied area with a wide range of applications, and numerous efficient approaches and algorithms have been proposed. Approximation approaches are commonly used for detecting dense subregions due to the complexity of exact methods. Existing algorithms are generally efficient for dense subtensor and subgraph detection, and can perform well in many applications. However, most of the existing works rely on the state-of-the-art greedy 2-approximation algorithm, which provides solutions with only a loose theoretical density guarantee. The main drawback of most of these algorithms is that they can estimate only one subtensor, or subgraph, at a time, with a low guarantee on its density. While some methods can, on the other hand, estimate multiple subtensors, they can guarantee the density with respect to the input tensor for the first estimated subtensor only. We address these drawbacks by providing both a theoretical and a practical solution for estimating multiple dense subtensors in tensor data with a higher lower bound on the density. In particular, we guarantee and prove a higher lower bound on the density of the estimated subgraphs and subtensors. We also propose a novel approach showing that there are multiple dense subtensors whose guaranteed density is greater than the lower bound used in the state-of-the-art algorithms. We evaluate our approach with extensive experiments on several real-world datasets, demonstrating its efficiency and feasibility.
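The greedy 2-approximation the abstract refers to is, in the subgraph case, the classical peeling algorithm: repeatedly remove the minimum-degree vertex and keep the densest intermediate subgraph seen. The sketch below illustrates that baseline (not the authors' multi-subtensor method), using density = |E|/|V|.

```python
def densest_subgraph(edges):
    """Greedy peeling; returns (best_density, vertex_set). Density = |E|/|V|."""
    adj = {}
    for u, v in edges:
        adj.setdefault(u, set()).add(v)
        adj.setdefault(v, set()).add(u)
    nodes = set(adj)
    m = len(edges)
    best_density, best_set = m / len(nodes), set(nodes)
    while len(nodes) > 1:
        u = min(nodes, key=lambda x: len(adj[x]))   # minimum-degree vertex
        m -= len(adj[u])                            # its edges leave the graph
        for v in adj[u]:
            adj[v].discard(u)
        nodes.discard(u)
        del adj[u]
        density = m / len(nodes)
        if density > best_density:
            best_density, best_set = density, set(nodes)
    return best_density, best_set

# A 4-clique plus a pendant vertex: the clique (density 6/4 = 1.5) wins.
edges = [(1, 2), (1, 3), (1, 4), (2, 3), (2, 4), (3, 4), (4, 5)]
d, s = densest_subgraph(edges)
print(d, sorted(s))   # 1.5 [1, 2, 3, 4]
```

The output is guaranteed to have at least half the density of the optimal subgraph — the loose factor-2 guarantee the abstract contrasts with its tighter multi-subtensor bounds.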

