Proposal for minimum information guidelines to report and reproduce results of particle tracking and motion analysis

2017
Author(s):  
Alessandro Rigano ◽  
Caterina Strambio-De-Castillia

Abstract: The proposed Minimum Information About Particle Tracking Experiments (MIAPTE) reporting guidelines described here aim to deliver a set of rules representing the minimal information required to report, and to support interpretation and assessment of, data arising from intracellular multiple particle tracking (MPT) experiments. Examples of such experiments are those tracking viral particles as they move from the site of entry to the site of replication within an infected cell, or those following vesicular dynamics during secretion, endocytosis, or exocytosis. By promoting the development of community standards, we hope that MIAPTE will contribute to making MPT data FAIR (Findable, Accessible, Interoperable, and Reusable). Ultimately, the goal of MIAPTE is to promote and maximize data access, discovery, preservation, re-use, and repurposing through efficient annotation, and thereby to enable reproducibility of particle tracking experiments. This document introduces MIAPTE v0.2, which updates the version that was posted to Fairsharing.org in October 2016. MIAPTE v0.2 is presented with the specific intent of soliciting comments from the particle tracking community, with the purpose of extending and improving the model. The MIAPTE guidelines are intended for different categories of users: 1) Scientists who wish to make new results available in a way that can be interpreted unequivocally by both humans and machines; for this class of users, MIAPTE provides data descriptors that define data entry terms and the analysis workflow in a unified manner. 2) Scientists wishing to evaluate, replicate and re-analyze results published by others; for this class of users, MIAPTE provides descriptors that define the analysis procedures in a manner that facilitates their reproduction. 3) Developers who want to take advantage of the MIAPTE schema to produce MIAPTE-compatible tools. MIAPTE consists of a list of controlled vocabulary (CV) terms that describe elements and properties for the minimal description of particle tracking experiments, with a focus on viral and vesicular traffic within cells. As part of this submission we provide entity relationship (ER) diagrams that show the relationships between terms. Finally, we also provide documents containing the MIAPTE-compliant XML schema describing the data model used by Open Microscopy Environment inteGrated Analysis (OMEGA), our novel particle tracking data analysis and management tool, which is reported in a separate manuscript. MIAPTE is structured in two sub-sections: 1) Section 1 contains elements, attributes and data structures describing the results of particle tracking, namely particles, links, trajectories and trajectory segments. 2) Section 2 contains elements that provide details about the algorithmic procedures utilized to produce and analyze trajectories, as well as the results of trajectory analysis. In addition, MIAPTE includes those OME-XML elements that are required to capture the acquisition parameters and the structure of the images to be subjected to particle tracking.
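The Section 1 entities map naturally onto a simple nested data model. The following is a minimal, hypothetical Python sketch of how particles, links, trajectory segments and trajectories could be represented in memory; the class and field names are illustrative placeholders, not the actual MIAPTE controlled-vocabulary terms or the OMEGA XML schema.

    # Illustrative sketch only: a toy in-memory form of the Section 1 entities
    # (particles, links, trajectory segments, trajectories). Field names are
    # placeholders, not MIAPTE CV terms.
    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class Particle:
        particle_id: int
        frame: int            # acquisition frame index
        x: float              # position, e.g. in pixels or microns
        y: float
        intensity: float      # detected signal intensity

    @dataclass
    class Link:
        source_id: int        # particle detected at frame t
        target_id: int        # particle detected at frame t + 1

    @dataclass
    class TrajectorySegment:
        segment_id: int
        links: List[Link] = field(default_factory=list)

    @dataclass
    class Trajectory:
        trajectory_id: int
        segments: List[TrajectorySegment] = field(default_factory=list)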

1994
Vol 33 (05)
pp. 479-487
Author(s):  
N. C. Salgado ◽  
A. P. Azevedo ◽  
L. Lopes ◽  
V. D. Raposo ◽  
I. Almeida ◽  
...  

Abstract: Computer-based Clinical Reporting Systems (CRS) for diagnostic departments that use structured data entry have a number of functional and structural affinities, suggesting that a common software architecture for CRS may be defined. Such an architecture should allow easy expandability and reusability of a CRS. We report the development methodology and the architecture of SISCOPE, a CRS originally designed for gastrointestinal endoscopy that is expandable and reusable. Its main components are a patient database, a knowledge base, a reports base, and screen and reporting engines. The knowledge base contains the description of the controlled vocabulary and all the information necessary to control the menu system, and is easily accessed and modified with a conventional text editor. The structure of the controlled vocabulary is formally presented as an entity-relationship diagram. The screen engine drives a dynamic user interface and the reporting engine automatically creates a medical report; both engines operate by following a set of rules and the information contained in the knowledge base. Clinical experience has shown this architecture to be highly flexible and to allow frequent modifications of both the vocabulary and the menu system. This structure fostered collaboration among development teams, insulating the domain expert from the details of the database and enabling him to modify the system as necessary and to test the changes immediately. The system has also been reused in several different domains.
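As a rough illustration of the architectural idea, namely a text-editable knowledge base that drives both the screen engine and the reporting engine, here is a minimal Python sketch; the vocabulary terms, report template and function names are invented for illustration and are not taken from SISCOPE.

    # Hypothetical sketch: the knowledge base defines the controlled vocabulary
    # and menu prompts, the screen engine validates structured entry against it,
    # and the reporting engine turns the validated entry into narrative text.
    KNOWLEDGE_BASE = {
        "finding": {"prompt": "Select finding",
                    "options": ["normal mucosa", "erosion", "ulcer"]},
        "site": {"prompt": "Select site",
                 "options": ["oesophagus", "stomach", "duodenum"]},
    }

    REPORT_TEMPLATE = "Endoscopy showed {finding} in the {site}."

    def screen_engine(answers: dict) -> dict:
        """Validate structured data entry against the controlled vocabulary."""
        for term, value in answers.items():
            allowed = KNOWLEDGE_BASE[term]["options"]
            if value not in allowed:
                raise ValueError(f"{value!r} is not a valid {term}: {allowed}")
        return answers

    def reporting_engine(answers: dict) -> str:
        """Generate the narrative report from a validated structured entry."""
        return REPORT_TEMPLATE.format(**answers)

    entry = screen_engine({"finding": "ulcer", "site": "duodenum"})
    print(reporting_engine(entry))  # Endoscopy showed ulcer in the duodenum.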


2021
Vol 251
pp. 02045
Author(s):  
Diego Ciangottini ◽  
Tommaso Boccali ◽  
Andrea Ceccanti ◽  
Daniele Spiga ◽  
Davide Salomoni ◽  
...  

The challenges posed by the HL-LHC era are not limited to the sheer amount of data to be processed: the capability of optimizing the analyst's experience will also bring important benefits for the LHC communities, in terms of total resource needs, user satisfaction, and a reduction of the time to publication. At the Italian National Institute for Nuclear Physics (INFN) a portable software stack for analysis has been proposed, based on cloud-native tools and capable of providing users with a fully integrated analysis environment for the CMS experiment. The main characterizing traits of the solution are its user-driven design and its portability to any cloud resource provider. All this is made possible by an evolution towards a “Python-based” framework that enables the use of a set of open-source technologies widely adopted in both cloud-native and data-science environments. In addition, a “single sign-on”-like experience is available thanks to the standards-based integration of INDIGO-IAM with all the tools. The integration of compute resources is done through the customization of a JupyterHub solution, able to spawn identity-aware user instances ready to access data with no further setup actions. Integration with GPU resources is also available, designed to sustain increasingly widespread ML-based workflows. Seamless connections between the user UI and batch/big-data processing frameworks (Spark, HTCondor) are possible. Finally, experiment data access latency is reduced thanks to the integrated deployment of a scalable set of caches, as developed in the context of the ESCAPE project and as such compatible with future scenarios where a data lake will be available for the research community. The outcome of the evaluation of such a solution in action is presented, showing how a real CMS analysis workflow can make use of the infrastructure to achieve its results.
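As an illustration of the "single sign-on"-like integration, the following is a hedged sketch of a jupyterhub_config.py that delegates login to an OIDC provider such as INDIGO-IAM and passes settings to the spawned user instance; the endpoint URLs, client credentials and environment variable are placeholders and do not describe the actual INFN deployment.

    # jupyterhub_config.py -- illustrative sketch only, not the INFN configuration.
    # JupyterHub delegates authentication to an OIDC provider (e.g. INDIGO-IAM)
    # and injects settings into the identity-aware user instance it spawns.
    from oauthenticator.generic import GenericOAuthenticator

    c = get_config()  # provided by JupyterHub when this file is loaded

    c.JupyterHub.authenticator_class = GenericOAuthenticator
    c.GenericOAuthenticator.authorize_url = "https://iam.example.org/authorize"  # placeholder
    c.GenericOAuthenticator.token_url = "https://iam.example.org/token"          # placeholder
    c.GenericOAuthenticator.userdata_url = "https://iam.example.org/userinfo"    # placeholder
    c.GenericOAuthenticator.client_id = "cms-analysis-facility"                  # placeholder
    c.GenericOAuthenticator.client_secret = "change-me"                          # placeholder
    c.GenericOAuthenticator.scope = ["openid", "profile", "email"]

    # Settings consumed by the spawned image so data access needs no extra setup;
    # the variable name is invented for this example.
    c.Spawner.environment = {"ANALYSIS_EXPERIMENT": "cms"}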


2021
Vol 39 (28_suppl)
pp. 318-318
Author(s):  
Ajeet Gajra ◽  
Dewilka Simons ◽  
Yolaine Jeune-Smith ◽  
Amy W. Valley ◽  
Bruce A. Feinberg

318 Background: EMRs are devised to improve the quality and efficiency of healthcare delivery and to reduce medical errors. Despite the widespread use of EMRs, various factors can limit their effectiveness in improving healthcare quality. General EMR use has been cited as a factor contributing to increased workload and clinician burnout in oncology and other specialties. The objective of this qualitative research study was to identify barriers perceived by medical oncologists and hematologists (mO/H) in utilizing EMR software and factors associated with levels of satisfaction. Methods: Between January and April 2021, mO/H from across the U.S. were invited to complete a web-based survey about various trends and critical issues in oncology care. The survey also captured physician demographics and practice characteristics. Responses were aggregated and analyzed using descriptive statistics. Results: A total of 369 mO/H completed the survey: 72% practice in a community setting; 47% identified as hospital employees; they have an average of 19 years of clinical experience, spend an average of 86% of their working time in direct patient care, and see an average of 17 patients per day on clinic days. Most (99%) of the mO/H surveyed use EMR software at their practice, with Epic (45%) and OncoEMR (16%) being the most common. Regarding satisfaction, 16% and 50% reported feeling highly satisfied and satisfied, respectively, with their current EMR, while 3% and 11% reported feeling very dissatisfied or dissatisfied, respectively. Some (19%) stated that they have considered changing their EMR, and 68% are unsure how EMR licensing fees for their practice are paid. The EMR pain points most commonly experienced were: time consumption, e.g., too many steps/clicks (70%); interoperability, e.g., difficulty sharing information across institutions or other EMR software (45%); data entry issues, e.g., difficulty entering clinical information, scheduling patient visits and reminders, or ordering multiple labs (38%); and poor workflow support (31%). The most useful aspects/features of their EMR software reported were availability of information, e.g., preloaded protocols, chemotherapy regimens and pathways (64%); data access (64%); and multiple access points, including remote access (37%). Conclusions: Satisfaction with EMRs was generally positive among the mO/H surveyed. However, there are multiple deterrents to the efficient use of current EMR systems. This information is essential to the design of next-generation EMRs (an Intelligent Medical Records system), allowing the incorporation of the aspects most useful to end-users, such as pathway access, preloaded information on cancer management, ease of access and portability, and a user experience that minimizes clicks and reduces the time physicians spend in the EMR.
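As a minimal illustration of the descriptive aggregation mentioned in the Methods, the following pandas sketch computes response shares; the column names and sample rows are invented and do not reproduce the survey data.

    # Toy sketch of descriptive statistics over survey responses (invented rows).
    import pandas as pd

    responses = pd.DataFrame({
        "setting": ["community", "academic", "community"],
        "emr": ["Epic", "OncoEMR", "Epic"],
        "satisfaction": ["satisfied", "highly satisfied", "dissatisfied"],
    })

    # Share of respondents per EMR vendor and per satisfaction level
    print(responses["emr"].value_counts(normalize=True))
    print(responses["satisfaction"].value_counts(normalize=True))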


1993
Vol 8 (1)
pp. 3-13
Author(s):  
P. Pete Chong ◽  
Ye-Sho Chen ◽  
James M. Pruett

Successful information technology transfer requires effective communication and clear, concise information exchange. This paper, using the Louisiana econometric model as a case study, proposes a pictorial approach to presenting and managing the complex factors essential to information technology transfer. The approach uses multi-layer entity-relationship diagrams to provide a meaningful framework for the entire forecasting process, to ensure better model maintenance when changes in social/economic structures require reformulation, and to supply a procedural and data dictionary for clear documentation. The pictorial approach is both intuitive and readable, capable of serving as a task management tool, a model implementation aid, and a system maintenance resource.
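One way to make such a multi-layer pictorial model machine-readable is to encode each diagram layer as an entry in a data dictionary. The Python sketch below is purely illustrative; the layer, entity and relationship names are invented and are not taken from the Louisiana model.

    # Hypothetical encoding of a multi-layer ER view as a data dictionary.
    MODEL_DICTIONARY = {
        "layer_1_overview": {
            "entities": ["ExogenousInputs", "EconometricModel", "Forecast"],
            "relationships": [("ExogenousInputs", "feeds", "EconometricModel"),
                              ("EconometricModel", "produces", "Forecast")],
        },
        "layer_2_detail": {
            "entities": ["EmploymentSeries", "TaxRevenueSeries"],
            "relationships": [("EmploymentSeries", "explains", "TaxRevenueSeries")],
        },
    }

    def describe(layer: str) -> None:
        """Print the relationships documented for one diagram layer."""
        for src, verb, dst in MODEL_DICTIONARY[layer]["relationships"]:
            print(f"{src} --{verb}--> {dst}")

    describe("layer_1_overview")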


2020
Vol 12 (15)
pp. 6207
Author(s):  
Carla Andrade Arteaga ◽  
Raúl Rodríguez-Rodríguez ◽  
Juan-José Alfaro-Saiz ◽  
María-José Verdecho

This paper presents a methodology for quantifying the impact of Total Quality Management (TQM) elements on organisational strategic sustainable development, integrating the well-known strategic management tool of the Balanced Scorecard to represent the strategic part of the organisation, and the multi-criteria technique Analytic Network Process (ANP) to identify and quantify that impact. Additionally, the application of TQM directly generates some organisational improvements, or outputs, which help model a decisional ANP network constituted by three building blocks (TQM elements, strategic objectives, and outputs) and their interrelationships. The application of the methodology to an oil firm, carried out by an expert group, offered meaningful results from a decision-making point of view, developed through three different analyses: the Global analysis, which identified the global weight of each variable; the Analysis of Influences, which established sound cause–effect relationships between the variables in order to identify the elements (TQM elements and outputs) that matter most for achieving the strategic objectives; and the Integrated analysis, which pointed out which TQM elements should be fostered in order to achieve the most important sustainable strategic objectives. Finally, it is suggested that the methodology be applied to organisations of other sizes and sectors of activity, and that other techniques introducing fuzzy elements be used.
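For readers unfamiliar with ANP, the global weights referred to in the Global analysis are conventionally obtained by raising the column-stochastic weighted supermatrix to successive powers until it converges. The sketch below shows this limit calculation on a toy 3x3 matrix; the numbers are invented and are not data from the oil-firm case study.

    # Toy ANP limit-supermatrix calculation (invented weights, not case-study data).
    import numpy as np

    W = np.array([            # column-stochastic weighted supermatrix
        [0.0, 0.6, 0.3],
        [0.5, 0.0, 0.7],
        [0.5, 0.4, 0.0],
    ])

    limit = np.linalg.matrix_power(W, 50)   # approximates the limit supermatrix
    global_weights = limit[:, 0]            # any column of the converged matrix
    print(global_weights)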


1992
Vol 7 (1)
pp. 63-78
Author(s):  
Magda Stouthamer-Loeber ◽  
Welmoet van Kammen ◽  
Rolf Loeber

Studies that assess large numbers of subjects for longitudinal research, for epidemiological purposes, or for the evaluation of prevention and intervention efforts are very costly and should be undertaken with the greatest care to ensure their success. The success of a study, apart from its scientific merit, depends largely on the ability of the researcher to plan and set up a smoothly running operation. However, the skills required for such a task are often not acquired in academic training, nor do scientific journals abound with information on the practical aspects of running a large study. This paper summarizes the experience gained in executing a longitudinal study and covers aspects of planning, hiring of staff, training and supervision of interviewers, data collection, and data entry and management. The importance of using the computer as a management tool is stressed.


1986
Vol 20 (2)
pp. 165-172
Author(s):  
R. Wootton ◽  
P. A. Flecknell

A suite of computer programs (available from the Laboratory Animal Science Association) has been written to carry out much of the routine administration of a central animal house facility. Principal functions include stock control and accounting. The programs are fully portable and can be implemented on almost any microcomputer. They are protected against data-entry errors and can be used by staff who have little or no computer experience. In 10 months' use at the Comparative Biology Centre, the Newcastle University Animal House Management System has become an indispensable management tool. The system has also been successfully implemented at Bath University.
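The protection against data-entry errors mentioned above amounts to validating each record before it is stored. The fragment below is a hypothetical sketch of that idea in Python (the original suite was not written in Python); field names and ranges are invented for illustration.

    # Hypothetical data-entry guard for a stock-control record (invented fields).
    def read_stock_entry(raw: dict) -> dict:
        """Validate a stock-control record before it is written to storage."""
        species = raw.get("species", "").strip().lower()
        if species not in {"mouse", "rat", "rabbit"}:
            raise ValueError(f"unknown species: {raw.get('species')!r}")
        count = int(raw["count"])           # raises ValueError if non-numeric
        if not 0 <= count <= 10_000:
            raise ValueError(f"implausible animal count: {count}")
        return {"species": species, "count": count}

    print(read_stock_entry({"species": "Mouse", "count": "24"}))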


Author(s):  
Wolf-Henning Kusber ◽  
Andreas Kohlbecker ◽  
Heba Mohamad ◽  
Anton Güntsch ◽  
Walter G. Berendsohn ◽  
...  

The International Code of Nomenclature (ICN) for algae, fungi, and plants provides for nomenclatural indexing through nomenclatural repositories (Turland et al. 2018, Art. 42). Registering nomenclatural novelties and nomenclatural acts means that repositories will keep track of names (species names and names at all ranks, replacement names, names proposed for conservation or rejection, validated names) and of nomenclatural types, including lectotypes and epitypes. Accordingly, PhycoBank has been advocated by different players such as the International Society for Diatom Research (ISDR), the Global Biodiversity Information Facility (GBIF), and the Special Committee on Registration of Algal and Plant Names (including fossils). Aided by a grant from the German Research Foundation (DFG, JA 874/8-1), PhycoBank has been established at the BGBM Berlin as the repository for nomenclatural acts of algae. As added value, PhycoBank handles orthographical variants by linking the published spelling of a name to the corrected one, with reference to the respective article of the ICN (Turland et al. 2018, Art. 60). Almost all nomenclatural acts are the result of taxonomic issues, but they also have implications for the taxonomic work of specialists worldwide. The challenge in implementing a registration system like PhycoBank is to inform individual scientists as well as to feed data into data networks, strengthening the underlying names backbone that links scientific names to occurrences. Since June 2018, PhycoBank staff have been operating the registration system using a user-friendly data entry web application. This interface for data entry by volunteers has been available since March 2019. All data entered into the system undergo a curatorial process to assure a high level of data quality. The data entry web application is complemented by a public data access portal, available at https://www.phycobank.org (Fig. 1). PhycoBank can be searched for scientific names (including unregistered formal or informal higher-rank names), for categories of types, and for PhycoBank identifiers. PhycoBank assigns resolvable and globally unique HTTP-based identifiers to nomenclatural acts, e.g. https://phycobank.org/100040 for the genus Iconella. Via these PhycoBank identifiers, the corresponding data and metadata can be retrieved in human- and machine-readable formats. More than ten journals have published PhycoBank identifiers so far, allowing cross-linking between their PDFs and the PhycoBank system. The Pensoft journals are pioneering an automatic registration workflow modeled and specified by the PhycoBank team. Classifications are frequently subject to change; currently, the algal classification is under discussion because of results from phylogenetic research. PhycoBank aims to be neutral with respect to higher classification, but it tracks the classification information of each registered name in a directed graph of available higher-rank names, in order to record fragments of higher classification information and to facilitate search functionality. All scientists, editors, and publishers involved in the publication of nomenclatural novelties are invited to contact PhycoBank ([email protected]) to help shape the prototype registration process and to improve PhycoBank’s functionality.
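As a small illustration of how the resolvable identifiers can be used programmatically, the Python sketch below requests the Iconella record cited in the text; the use of an Accept header to obtain a machine-readable representation is an assumption about the service's content negotiation, not documented behaviour.

    # Sketch: resolving the PhycoBank identifier cited above.
    import requests

    identifier = "https://phycobank.org/100040"   # genus Iconella (from the text)

    human = requests.get(identifier, timeout=30)  # human-readable landing page
    machine = requests.get(identifier, timeout=30,
                           headers={"Accept": "application/json"})  # assumed format

    print(human.status_code, human.headers.get("Content-Type"))
    print(machine.status_code, machine.headers.get("Content-Type"))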

