Physicians’ Concept of Time Usage – A Key Concern in EPR Deployment

Author(s):  
Rebecka Janols ◽  
Bengt Göransson ◽  
Erik Borälv ◽  
Bengt Sandblad

2021 ◽  
Vol 16 (1) ◽  
Author(s):  
Jens Zentgraf ◽  
Sven Rahmann

Abstract

Motivation: With an increasing number of patient-derived xenograft (PDX) models being created and subsequently sequenced to study tumor heterogeneity and to guide therapy decisions, there is a similarly increasing need for methods to separate reads originating from the graft (human) tumor and reads originating from the host species’ (mouse) surrounding tissue. Two kinds of methods are in use: on the one hand, alignment-based tools require that reads are first mapped and aligned (by an external mapper/aligner) to the host and graft genomes separately; the tool itself then processes the resulting alignments and quality metrics (typically BAM files) to assign each read or read pair. On the other hand, alignment-free tools work directly on the raw read data (typically FASTQ files). Recent studies compare different approaches and tools, with varying results.

Results: We show that alignment-free methods for xenograft sorting are superior in CPU time usage and equivalent in accuracy. We improve upon the state of the art by presenting a fast, lightweight approach based on three-way bucketed quotiented Cuckoo hashing. Our hash table requires memory comparable to an FM index typically used for read alignment, and less than other alignment-free approaches. It allows extremely fast lookups and uses less CPU time than other alignment-free methods, and than alignment-based methods at similar accuracy. Several engineering steps (e.g., shortcuts for unsuccessful lookups, software prefetching) improve the performance even further.

Availability: Our software xengsort is available under the MIT license at http://gitlab.com/genomeinformatics/xengsort. It is written in numba-compiled Python and comes with sample Snakemake workflows for hash table construction and dataset processing.
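To make the alignment-free idea concrete, the following is a minimal sketch of k-mer-vote read classification in plain Python. It is illustrative only: xengsort's actual index is a three-way bucketed quotiented Cuckoo hash table, not the Python sets used here, and the function names and the choice k=25 are assumptions, not taken from the paper.

```python
# Minimal alignment-free read-classification sketch (hypothetical; xengsort
# itself uses a three-way bucketed Cuckoo hash table with quotienting).

def canonical_kmers(seq, k=25):
    """Yield canonical k-mers: the lexicographic minimum of each k-mer
    and its reverse complement."""
    comp = str.maketrans("ACGT", "TGCA")
    for i in range(len(seq) - k + 1):
        kmer = seq[i:i + k]
        rc = kmer.translate(comp)[::-1]
        yield min(kmer, rc)

def classify_read(seq, graft_kmers, host_kmers, k=25):
    """Assign a read to 'graft', 'host', 'both', or 'neither'
    by counting k-mers that occur exclusively in one reference set."""
    g = h = 0
    for kmer in canonical_kmers(seq, k):
        in_g = kmer in graft_kmers
        in_h = kmer in host_kmers
        if in_g and not in_h:
            g += 1
        elif in_h and not in_g:
            h += 1
    if g and not h:
        return "graft"
    if h and not g:
        return "host"
    if g and h:
        return "both"
    return "neither"
```

In the real tool, replacing the sets with a cache-friendly hash table (plus shortcuts for unsuccessful lookups and software prefetching) is what yields the reported CPU-time advantage.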


2013 ◽  
Vol 718-720 ◽  
pp. 792-796
Author(s):  
Ming Fu Zhao ◽  
Zheng Wei Zhang ◽  
Nian Wang

As is well known, frying oil becomes waste oil when it is used excessively, and long-term usage also causes serious health effects. This paper studies dragon fish oil that was excessively fried 10 times. The spectral absorbance values at the characteristic absorption peaks (323, 391, and 443 nm) are taken as the dependent variables. An interval partial least squares (iPLS) model is then built; using MATLAB, the optimum number of intervals is found to be 5 and the best number of factors for the wavelength range is 7. The prediction correlation coefficient R is 0.998. Cross-validation verification gives Q² = −0.3461 < 0.0975, from which the PLS equations Y1, Y2 and Y3 are established. The resulting model can effectively predict the content at the characteristic absorption peaks of frying oil.
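For readers who want to experiment with the interval PLS idea, a hedged Python sketch using scikit-learn is given below. The paper's own workflow is in MATLAB; this is not the authors' code, and the interval count, factor limit, and cross-validation settings are illustrative assumptions.

```python
# Illustrative interval-PLS (iPLS) wavelength selection sketch; data layout,
# interval count, and factor limit are assumptions, not the paper's settings.
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_score

def select_best_interval(X, y, n_intervals=10, max_factors=7):
    """Fit a PLS model on each wavelength interval of spectrum matrix X
    (samples x wavelengths) and return the interval and factor count
    with the highest cross-validated R^2 against target y."""
    n_wavelengths = X.shape[1]
    bounds = np.linspace(0, n_wavelengths, n_intervals + 1, dtype=int)
    best = (-np.inf, None, None)  # (cv_r2, interval_index, n_factors)
    for i in range(n_intervals):
        Xi = X[:, bounds[i]:bounds[i + 1]]
        for n_comp in range(1, min(max_factors, Xi.shape[1]) + 1):
            pls = PLSRegression(n_components=n_comp)
            score = cross_val_score(pls, Xi, y, cv=5, scoring="r2").mean()
            if score > best[0]:
                best = (score, i, n_comp)
    return best
```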


i-com ◽  
2008 ◽  
Vol 6 (3/2007) ◽  
pp. 23-29 ◽  
Author(s):  
Birgit Bomsdorf

Summary: Task modelling has entered the development process of web applications, strengthening the usage-centred view within the early steps of Web Engineering (WE). In current approaches, however, this view is not kept up during subsequent activities to the same degree as is the case in the field of Human-Computer Interaction (HCI). The modelling approach presented in this contribution combines models as known from WE with models used in HCI to change this situation. Basically, the WE-HCI integration is supported by combining task and object models as known from HCI with conceptual modelling known from WE. In this paper, the main focus is on the WebTaskModel, a task model adapted to web application concerns, and its contribution towards a task-related web user interface. The main difference to existing task models is the build-time and run-time usage of a generic task lifecycle. This enables the description of exceptions and erroneous situations during task performance (caused, e.g., by the stateless protocol or browser interaction) while keeping them clearly separated from the flow of correct action.
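The abstract does not spell out the states of the generic task lifecycle, so the following Python sketch is purely illustrative: it shows how transition rules fixed at build time can separate erroneous situations (e.g., state lost through the stateless protocol or a browser reload) from the flow of correct action. All state names are assumptions.

```python
# Illustrative generic task lifecycle as a small state machine; the concrete
# states of the WebTaskModel are not given in the abstract.
from enum import Enum, auto

class TaskState(Enum):
    INITIATED = auto()
    RUNNING = auto()
    SUSPENDED = auto()
    COMPLETED = auto()
    FAILED = auto()  # erroneous situation, e.g. session state lost

# Allowed transitions, fixed at build time.
TRANSITIONS = {
    TaskState.INITIATED: {TaskState.RUNNING},
    TaskState.RUNNING:   {TaskState.SUSPENDED, TaskState.COMPLETED, TaskState.FAILED},
    TaskState.SUSPENDED: {TaskState.RUNNING, TaskState.FAILED},
    TaskState.COMPLETED: set(),
    TaskState.FAILED:    {TaskState.INITIATED},  # e.g. restart after browser reload
}

class Task:
    def __init__(self, name):
        self.name = name
        self.state = TaskState.INITIATED

    def transition(self, target):
        """Move to `target` at run time if the lifecycle allows it, else raise."""
        if target not in TRANSITIONS[self.state]:
            raise ValueError(f"{self.name}: illegal transition "
                             f"{self.state.name} -> {target.name}")
        self.state = target
```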


Salmand ◽  
2016 ◽  
Vol 11 (3) ◽  
pp. 400-415 ◽  
Author(s):  
Maryam Sharifian Sani ◽  
Nasibeh Zanjari ◽  
Rasoul Sadeghi

Author(s):  
Tawfeeq Nazir

Libraries spend a large proportion of their budgets on subscriptions to information resources (print and electronic). Since the early 2000s, an increasing percentage of library budgets has shifted to the purchase of e-resources. Usage data for e-resources, provided to libraries by publishers and aggregators, has proved helpful for libraries and decision makers in selecting the best possible resources for their users. Over the years, many e-metrics tools have been developed and continue to be refined to deliver reliable, consistent, and timely usage data. The present study discusses the various e-metrics tools and their advantages and limitations.
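As a small illustration of how such usage data supports selection decisions, the following sketch computes cost per use from per-title download counts; the field names and figures are invented for illustration and do not come from any specific e-metrics tool.

```python
# Hypothetical cost-per-use calculation from e-resource usage data.
# Titles, costs, and download counts are invented for illustration.
subscriptions = [
    {"title": "Journal A", "annual_cost": 4200.0, "downloads": 3500},
    {"title": "Journal B", "annual_cost": 9800.0, "downloads": 410},
]

for sub in subscriptions:
    # A lower cost per use suggests better value for the library's budget.
    cost_per_use = sub["annual_cost"] / max(sub["downloads"], 1)
    print(f"{sub['title']}: cost per use = ${cost_per_use:.2f}")
```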

