multiple processing
Recently Published Documents

TOTAL DOCUMENTS: 134 (five years: 31)

H-INDEX: 19 (five years: 3)

2021 ◽  
Vol 12 (1) ◽  
Author(s):  
Zachary C. Elmore ◽  
L. Patrick Havlik ◽  
Daniel K. Oh ◽  
Leif Anderson ◽  
George Daaboul ◽  
...  

Abstract
Adeno-associated viruses (AAV) rely on helper viruses to transition from latency to lytic infection. Some AAV serotypes are secreted in a pre-lytic manner as free or extracellular vesicle (EV)-associated particles, although the underlying mechanisms are unknown. Here, we discover that the membrane-associated accessory protein (MAAP), expressed from a frameshifted open reading frame in the AAV cap gene, is a novel viral egress factor. MAAP contains a highly conserved, cationic amphipathic domain critical for AAV secretion. Wild-type or recombinant AAV with a mutated MAAP start site (MAAPΔ) shows markedly attenuated secretion and, correspondingly, increased intracellular retention. Trans-complementation with MAAP restored secretion of multiple AAV/MAAPΔ serotypes. Further, multiple processing and analytical methods corroborate that one plausible mechanism by which MAAP promotes viral egress is through AAV/EV association. In addition to characterizing a novel viral egress factor, we highlight a prospective engineering platform to modulate secretion of AAV vectors or other EV-associated cargo.


2021 ◽  
Author(s):  
Jakob P. Pettersen ◽  
Eivind Almaas

Abstract
Background: Differential co-expression network analysis has become an important tool for understanding biological phenotypes and diseases. The CSD algorithm generates differential co-expression networks by comparing gene co-expression between two different conditions. Each gene pair is assigned conserved (C), specific (S), and differentiated (D) scores based on its co-expression under the two conditions. The result of the procedure is a network whose nodes are genes and whose links are the gene pairs with the highest C-, S-, and D-scores. However, the existing CSD implementations suffer from poor computational performance, cumbersome user procedures, and a lack of documentation.
Results: We created the R package csdR, aimed at good performance together with ease of use, sufficient documentation, and the ability to interoperate with other data-analysis tools. csdR was benchmarked on a realistic dataset with 20,645 genes. After verifying that the chosen number of iterations gave sufficient robustness, we tested its performance against the two existing CSD implementations: csdR outperformed one of them, while the other failed to run. Our implementation can utilize multiple processing cores, although we were unable to achieve more than ~2.7× parallel speedup, with saturation reached at about 10 cores.
Conclusions: The results suggest that csdR is a useful tool for differential co-expression analysis, able to generate robust results within a workday on datasets of realistic size when run on a workstation or compute server.
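The C-, S-, and D-scores described above can be sketched numerically. The snippet below follows one common formulation of the CSD scores, in which each score combines the gene pair's correlations under the two conditions and normalizes by the pooled variance of those estimates; it is an illustrative Python sketch, not the csdR API, and the function and variable names are my own.

```python
import numpy as np

def csd_scores(rho1, rho2, var1, var2):
    """C-, S-, D-scores for one gene pair, given its co-expression
    correlation under two conditions (rho1, rho2) and the variances
    of those correlation estimates (var1, var2)."""
    denom = np.sqrt(var1 + var2)
    c = np.abs(rho1 + rho2) / denom                                  # conserved
    s = np.abs(np.abs(rho1) - np.abs(rho2)) / denom                  # specific
    d = (np.abs(rho1) + np.abs(rho2) - np.abs(rho1 + rho2)) / denom  # differentiated
    return c, s, d
```

A pair strongly co-expressed in both conditions scores high on C; a pair whose correlation flips sign between conditions scores high on D; a pair correlated in only one condition scores high on S.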


2021 ◽  
Author(s):  
Maxwell Adam Levinson ◽  
Justin Niestroy ◽  
Sadnan Al Manir ◽  
Karen Fairchild ◽  
Douglas E. Lake ◽  
...  

Abstract
Results of computational analyses require transparent disclosure of their supporting resources, while the analyses themselves can be very large in scale and involve multiple processing steps separated in time. Evidence for the correctness of any analysis should include not only a textual description but also a formal record of the computations that produced the result, including accessible data and software together with runtime parameters, environment, and personnel involved. This article describes FAIRSCAPE, a reusable computational framework enabling simplified access to modern scalable cloud-based components. FAIRSCAPE fully implements the FAIR data principles and extends them to provide fully FAIR Evidence, including machine-interpretable provenance of datasets, software, and computations as metadata for all computed results. The FAIRSCAPE microservices framework creates a complete Evidence Graph for every computational result, including persistent identifiers with metadata resolvable to the software, computations, and datasets used in the computation, and stores a URI to the root of the graph in the result's metadata. An ontology for Evidence Graphs, EVI (https://w3id.org/EVI), supports inferential reasoning over the evidence. FAIRSCAPE can run nested or disjoint workflows and preserves provenance across them. It can run Apache Spark jobs, scripts, workflows, or user-supplied containers. All objects, including software, are assigned persistent IDs. All results are annotated with FAIR metadata using the evidence-graph model for access, validation, reproducibility, and re-use of archived data and software.
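The evidence-graph idea — every result resolving back to the computation, software, and datasets that produced it — can be illustrated with a toy traversal. The identifiers and property names below are illustrative stand-ins (not real ARKs and not the EVI ontology's exact vocabulary), and the walk is a generic sketch rather than the FAIRSCAPE implementation.

```python
# Toy evidence graph: each node is a persistent ID mapping to metadata
# that links a result to the computation that generated it, and the
# computation to the software and datasets it used.
evidence = {
    "ark:/99999/result-1": {"@type": "Dataset",
                            "generatedBy": "ark:/99999/computation-1"},
    "ark:/99999/computation-1": {"@type": "Computation",
                                 "usedSoftware": ["ark:/99999/software-1"],
                                 "usedDataset": ["ark:/99999/input-1"]},
    "ark:/99999/software-1": {"@type": "Software"},
    "ark:/99999/input-1": {"@type": "Dataset"},
}

def provenance(graph, root):
    """Walk the evidence graph from a result's root ID to every
    supporting resource reachable through provenance links."""
    seen, stack = set(), [root]
    while stack:
        node = stack.pop()
        if node in seen or node not in graph:
            continue
        seen.add(node)
        meta = graph[node]
        for key in ("generatedBy", "usedSoftware", "usedDataset"):
            val = meta.get(key, [])
            stack.extend(val if isinstance(val, list) else [val])
    return seen
```

Storing only the root ID in a result's metadata is enough, because the rest of the evidence is recovered by this kind of traversal.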


2021 ◽  
Vol 11 (4) ◽  
pp. 7
Author(s):  
Luis Germán García ◽  
Emanuel Montoya ◽  
Sebastian Isaza ◽  
Ricardo A. Velasquez

Computing devices of all types have largely converged on central processing units featuring multiple processing cores. To develop efficient software for such devices, programmers need to learn how to write parallel programs. We present an infrastructure to support parallel programming assignments for online courses. We developed an extension to the Open edX platform with a backend that handles the execution of student code on a cluster lab. The web user interface offers instructors a wide range of configuration options for the programming assignments as well as a flexible definition of criteria for automatic grading. We have successfully integrated the software with Open edX and tested it with a real parallel programming cluster lab.
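The kind of student submission such an infrastructure executes and grades can be as small as a data-parallel reduction. The following is a hypothetical assignment solution, not part of the described platform: split the input across worker processes, compute partial results, and combine them, with automatic grading typically checking correctness against a sequential reference (and possibly speedup).

```python
from multiprocessing import Pool

def partial_sum(chunk):
    """Worker: sum of squares over one slice of the input."""
    return sum(x * x for x in chunk)

def parallel_sum_squares(data, workers=4):
    """Split the data across processes and combine the partial sums."""
    chunks = [data[i::workers] for i in range(workers)]
    with Pool(workers) as pool:
        return sum(pool.map(partial_sum, chunks))
```

A grading criterion in this style would compare `parallel_sum_squares(data)` against the sequential `sum(x * x for x in data)`.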


Minerals ◽  
2021 ◽  
Vol 11 (3) ◽  
pp. 288
Author(s):  
Saad Salman ◽  
Khan Muhammad ◽  
Asif Khan ◽  
Hylke J. Glass

Clustering approaches are widely used to group similar objects and facilitate problem analysis and decision-making in many fields. During short-term planning of open-pit mines, clustering aims to aggregate similar blocks based on their attributes (e.g., geochemical grades, rock types, geometallurgical parameters) while honoring various constraints: cluster shape, size, alignment with the mining direction, destination, and rock-type homogeneity. This approach helps to reduce the computational cost of optimizing short-term mine plans. Previous studies have presented ways to perform clustering without honoring constraints specific to mining. This paper presents a novel block clustering heuristic capable of considering and honoring a set of mining block aggregation requirements and constraints. Constraints can relate to clustering adjacent blocks, achieving higher destination homogeneity, controlling cluster size, maintaining consistency with the mining direction, and achieving clusters with mineable shapes and homogeneous rock types. The proposed algorithm's application on two different datasets demonstrates its efficiency and capability in generating reasonable block clusters while meeting different predefined aggregation requirements and constraints.
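Constraint-aware block aggregation of the kind described can be illustrated with a toy greedy flood-fill: grow clusters over adjacent blocks that share an attribute (here, rock type) while capping cluster size. This is a deliberately simplified sketch for intuition, not the paper's heuristic, and it ignores the mining-direction and shape constraints.

```python
def cluster_blocks(grid, max_size):
    """Greedy flood-fill clustering of a 2-D block model: adjacent
    blocks with the same rock type join a cluster, up to max_size."""
    rows, cols = len(grid), len(grid[0])
    labels = [[None] * cols for _ in range(rows)]
    next_label = 0
    for r in range(rows):
        for c in range(cols):
            if labels[r][c] is not None:
                continue
            # Start a new cluster at (r, c) and grow it over
            # unlabeled 4-neighbours of the same rock type.
            stack, size = [(r, c)], 0
            while stack and size < max_size:
                i, j = stack.pop()
                if (0 <= i < rows and 0 <= j < cols
                        and labels[i][j] is None
                        and grid[i][j] == grid[r][c]):
                    labels[i][j] = next_label
                    size += 1
                    stack += [(i + 1, j), (i - 1, j), (i, j + 1), (i, j - 1)]
            next_label += 1
    return labels
```

The adjacency and size caps mirror two of the constraints above; the real heuristic must additionally score candidate merges against destination homogeneity and mineable shape.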


Author(s):  
Thanh-Tam NGUYEN ◽  
Son-Thai LE ◽  
Van-Thuy LE

One of the most widely used biometric techniques for identity authentication is face recognition. It plays an essential role in many areas, such as daily life, public security, finance, the military, and smart schools. The facial recognition task is to identify or verify a person's identity based on their face. The first step is face detection, which detects and locates human faces in images and videos. The face-matching process then assigns an identity to the detected face. In recent years, many face recognition systems have improved performance using deep learning models. Deep learning learns representations of the face through multiple processing layers with multiple levels of feature extraction. This approach has substantially improved face recognition since 2014, launched by the breakthroughs of DeepFace and DeepID. However, how to choose the best hyperparameters remains an open question. In this paper, we introduce a method for adaptive hyperparameter selection to improve recognition accuracy. The proposed method achieves improvements on three datasets.
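The abstract does not specify how its adaptive selection works, so the following is only a generic baseline for the problem it addresses: sample hyperparameter configurations from a search space and keep the one with the best validation accuracy. The search space, parameter names, and `evaluate` callback are all hypothetical.

```python
import random

def hyperparameter_search(evaluate, space, budget=20, seed=0):
    """Baseline random search: draw `budget` configurations from `space`
    and return the one with the highest validation accuracy.
    `evaluate(cfg)` is a user-supplied callback (hypothetical here)."""
    rng = random.Random(seed)
    best_cfg, best_acc = None, float("-inf")
    for _ in range(budget):
        cfg = {name: rng.choice(values) for name, values in space.items()}
        acc = evaluate(cfg)
        if acc > best_acc:
            best_cfg, best_acc = cfg, acc
    return best_cfg, best_acc
```

An adaptive method, as the paper proposes, would go beyond this by steering later draws toward regions that scored well earlier, rather than sampling uniformly.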


2021 ◽  
Vol 291 ◽  
pp. 02014
Author(s):  
Nadezhda Avramchikova ◽  
Ivan Rozhnov ◽  
Tatyana Zelenskaya ◽  
Olga Maslova ◽  
Vyacheslav Avramchikov

The defining feature of the circular economy is the restorative, closed nature of the production cycle, achieved through "green", nature-like technologies that reduce greenhouse gas emissions, slow the temperature rise on the planet, and preserve the environment. Circular-economy approaches correspond to the United Nations' goal-setting concept for sustainable socio-economic development and are widely used in the countries of the European Union. In their research, scientists have defined the circular economy as one that involves the complete, repeated processing of the resources used and provides energy savings. In this regard, the circular economy is called "green", i.e. preserving the natural resources of the planet and the environment on the basis of information technology. Currently, there is ample evidence that circularity has begun to permeate the linear economy and that innovative products and contracts are already available in various forms.


2021 ◽  
Vol 1 (4) ◽  
pp. 298-312
Author(s):  
Dong Qiu ◽  
Tingyi Liu

<abstract> <p>The number and range of studies applying Multi-Indicator Comprehensive Evaluation (MICE) are increasing, so it is important to reflect systematically on the understanding of the MICE method and the issues implied behind it. This paper compares the core concepts and methodological elements of the three papers that systematically study the MICE method. It finds that the three papers' views on the core issue are consistent and mutually supportive, but that they differ in the division and ordering of the evaluation steps. This paper also considers the historical status of the MICE and holds that the key to the quality of weighting lies in the "equivalent conversion" problem in the MICE. Taking the Human Development Index as an example, this paper illustrates the absoluteness of the "equivalent conversion" relationship. Furthermore, there are multiple processing methods for the MICE along the spatial dimension, and correspondingly multiple evaluation results; the results of the MICE therefore need to be used carefully. Finally, based on a systematic summary of and reflection on the MICE method, three suggestions are given for the application of the MICE method.</p> </abstract>
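The Human Development Index cited above is a concrete instance of "equivalent conversion": each incommensurable indicator (life expectancy, schooling, income) is first normalized to [0, 1] against fixed goalposts and only then aggregated, which since 2010 is done by geometric mean. The sketch below illustrates those two steps; the goalpost values in the test are illustrative, not the UNDP's official ones.

```python
def dimension_index(value, lo, hi):
    """Normalize one indicator to [0, 1] against goalposts lo and hi --
    the 'equivalent conversion' that puts incommensurable indicators
    on a common scale before aggregation."""
    return (value - lo) / (hi - lo)

def hdi(health, education, income):
    """Aggregate the three dimension indices by geometric mean,
    as the HDI has done since 2010."""
    return (health * education * income) ** (1 / 3)
```

The choice of goalposts and of aggregation function is exactly where multiple processing methods, and hence multiple evaluation results, arise.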

