Radiotherapy Physics Related Data Types and Basic Operations

Author(s):  
Pavel Dvorak
2020, Vol 15
Author(s):  
Deeksha Saxena ◽  
Mohammed Haris Siddiqui ◽  
Rajnish Kumar

Background: Deep learning (DL) is an artificial neural network-driven framework with multiple levels of representation, in which non-linear modules are combined so that representations are transformed from a low level to a progressively more abstract level. Though DL is used widely in almost every field, it has been particularly transformative in the biological sciences, where it is used in disease diagnosis and clinical trials. DL can be combined with machine learning, but the two are also used individually. DL is often a better choice than classical machine learning because it does not require an intermediate feature-extraction step and works well with larger datasets. DL is currently among the most discussed approaches for diagnosing and solving various biological problems. However, deep learning models still require refinement and experimental validation to become more productive. Objective: To review the available DL models and datasets that are used in disease diagnosis. Methods: Available DL models and their applications in disease diagnosis were reviewed, discussed and tabulated. Types of datasets and some of the popular disease-related data sources for DL were highlighted. Results: We analyzed the frequently used DL methods and data types and discussed some of the recent deep learning models used for solving different biological problems. Conclusion: The review presents useful insights about DL methods, data types, and the selection of DL models for disease diagnosis.
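The point above about DL needing no intermediate feature-extraction step can be pictured with a minimal sketch: raw inputs pass through stacked non-linear modules that yield increasingly abstract representations, ending in class probabilities. This is a toy NumPy forward pass under assumed layer sizes and random weights, not any model from the review.

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(0, x)

def forward(x, weights):
    """Pass input through stacked non-linear modules (layers)."""
    h = x
    for W, b in weights[:-1]:
        h = relu(h @ W + b)  # each layer yields a more abstract representation
    W, b = weights[-1]
    logits = h @ W + b
    # softmax over (hypothetical) disease classes
    e = np.exp(logits - logits.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

# Toy "patient feature" batch: 4 samples, 8 raw features, 3 classes.
layers = [(rng.normal(size=(8, 16)), np.zeros(16)),
          (rng.normal(size=(16, 3)), np.zeros(3))]
x = rng.normal(size=(4, 8))
probs = forward(x, layers)
print(probs.shape)  # (4, 3); each row sums to 1
```

In practice the weights would be learned from a labeled dataset rather than drawn at random, which is exactly where the large datasets discussed in the review come in.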


Author(s):  
Yu Wang

In this chapter we will focus on examining computer network traffic and data. A computer network combines a set of computers, physically and logically connecting them to exchange information. Network traffic acquired from a network system provides information on data communications within the network and between networks or individual computers. The most common data types are log data, such as Kerberos logs, Transmission Control Protocol/Internet Protocol (TCP/IP) logs, central processing unit (CPU) usage data, event logs, user command data, Internet visit data, operating system audit trail data, intrusion detection and prevention system (IDS/IPS) logs, NetFlow data, and Simple Network Management Protocol (SNMP) reporting data. Such information is unique and valuable for network security, specifically for intrusion detection and prevention. Although we have already presented some essential challenges in collecting such data in Chapter I, we will discuss traffic data, as well as other related data, in greater detail in this chapter. Specifically, we will describe system-specific and user-specific data types in the System-Specific Data and User-Specific Data sections, respectively, and provide detailed information on publicly available data in the Publicly Available Data section.
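Flow and log data of the kind listed above are typically aggregated before any intrusion analysis. As a sketch, assume a collector has dumped flow records in a tabular layout (real NetFlow exports are binary, and the field names here are illustrative); a first analysis step is an aggregate such as bytes sent per source host:

```python
import csv
import io
from collections import Counter

# Hypothetical flow export rendered as CSV; field names are assumptions.
raw = """src_ip,dst_ip,dst_port,proto,bytes
10.0.0.5,192.168.1.9,22,tcp,4096
10.0.0.5,192.168.1.9,22,tcp,8192
10.0.0.7,192.168.1.9,80,tcp,512
"""

flows = list(csv.DictReader(io.StringIO(raw)))

# Simple intrusion-detection-style aggregate: total bytes per source host.
bytes_per_src = Counter()
for f in flows:
    bytes_per_src[f["src_ip"]] += int(f["bytes"])

print(bytes_per_src["10.0.0.5"])  # 12288
```

Aggregates like this feed the statistical methods discussed later; an unusually large per-host total, for instance, can flag a candidate exfiltration event.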


Author(s):  
Kerina Jones ◽  
David Ford ◽  
Caroline Brooks

ABSTRACT Objectives: Whilst the current expansion of health-related big data and data linkage research are exciting developments with great potential, they bring a major challenge: how to strike an appropriate balance between making the data accessible for beneficial uses, whilst respecting the rights of individuals, the duty of confidentiality and protecting the privacy of person-level data, without undue burden to research. Approach: Using a case study approach, we describe how the UK Secure Research Platform (UKSeRP) for the Secure Anonymised Information Linkage (SAIL) databank addresses this challenge. We outline the principles, features and operating model of the SAIL UKSeRP, and how we are addressing the challenges of making health-related data safely accessible to increasing numbers of research users within a secure environment. Results: The SAIL UKSeRP has four basic principles to ensure that it is able to meet the needs of the growing data user community, and these are to: (A) operate a remote access system that provides secure data access to approved data users; (B) host an environment that provides a powerful platform for data analysis activities; (C) have a robust mechanism for the safe transfer of approved files in and out of the system; and (D) ensure that the system is efficient and scalable to accommodate a growing data user base. Subject to independent Information Governance approval and within a robust, proportionate Governance framework, the SAIL UKSeRP provides data users with a familiar Windows interface and their usual toolsets to access anonymously-linked datasets for research and evaluation. Conclusion: The SAIL UKSeRP represents a powerful analytical environment within a privacy-protecting safe haven and secure remote access system which has been designed to be scalable and adaptable to meet the needs of the rapidly growing data linkage community.
Further challenges lie ahead as the landscape develops and emerging data types become more available. UKSeRP technology is available and customisable for other use cases within the UK and international jurisdictions, to operate within their respective governance frameworks.


2012, Vol 14 (3), pp. 815-828
Author(s):  
Bi Yu Chen ◽  
Jianzhong Lu ◽  
Onyx W. H. Wai ◽  
Xiaoling Chen

Coastal-related data are four-dimensional in nature, varying not only in location and water depth but also in time. The heterogeneous and dynamic nature of coastal-related data makes modeling and visualization of these data a challenging task. A new object-oriented spatiotemporal data model to represent dynamic three-dimensional coastal data is proposed in this study. In the proposed model, a set of abstract data types allowing suitable spatiotemporal operations is defined to manipulate complex coastal data. In addition, a logical data model is proposed for the design of a spatiotemporal database. The proposed object-oriented and logical data models are implemented in a real-world coastal information management system in Hong Kong. An elegant visualization framework for displaying the coastal data, based on the concept of a time–depth bar, is presented in the case study.
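The abstract data types mentioned above can be sketched in miniature: a four-dimensional observation (location, depth, time, value) plus spatiotemporal operations over a collection of such observations. The class and method names below are assumptions for illustration, not the model from the paper.

```python
from dataclasses import dataclass, field

@dataclass(frozen=True)
class Sample:
    """One coastal observation: location, water depth, time, measured value."""
    x: float
    y: float
    depth: float
    t: float
    value: float

@dataclass
class CoastalSeries:
    """Hypothetical ADT over 4-D (x, y, depth, time) coastal data."""
    samples: list = field(default_factory=list)

    def add(self, s: Sample):
        self.samples.append(s)

    def slice_time(self, t0: float, t1: float):
        """Spatiotemporal operation: all samples observed within [t0, t1]."""
        return [s for s in self.samples if t0 <= s.t <= t1]

    def profile_at(self, t: float):
        """Depth profile at a given time, sorted by depth (time-depth bar idea)."""
        return sorted((s for s in self.samples if s.t == t),
                      key=lambda s: s.depth)

series = CoastalSeries()
series.add(Sample(0.0, 0.0, depth=5.0, t=1.0, value=20.1))
series.add(Sample(0.0, 0.0, depth=1.0, t=1.0, value=21.5))
series.add(Sample(0.0, 0.0, depth=1.0, t=2.0, value=21.0))
print(len(series.slice_time(0.5, 1.5)))  # 2
```

A `profile_at` query is the kind of operation a time-depth bar visualization would draw from: one vertical column of values at a chosen instant.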


Information, 2021, Vol 12 (2), pp. 69
Author(s):  
Anna Bernasconi ◽  
Silvia Grandi

Responding to the recent COVID-19 outbreak, several organizations and private citizens considered the opportunity to design and publish online explanatory data visualization tools for the communication of disease data supported by a spatial dimension. They responded to the need of receiving instant information arising from the broad research community, the public health authorities, and the general public. In addition, the growing maturity of information and mapping technologies, as well as of social networks, has greatly supported the diffusion of web-based dashboards and infographics, blending geographical, graphical, and statistical representation approaches. We propose a broad conceptualization of Web visualization tools for geo-spatial information, exceptionally employed to communicate the current pandemic; to this end, we study a significant number of publicly available platforms that track, visualize, and communicate indicators related to COVID-19. Our methodology is based on (i) a preliminary systematization of actors, data types, providers, and visualization tools, and on (ii) the creation of a rich collection of relevant sites clustered according to significant parameters. Ultimately, the contribution of this work includes a critical analysis of collected evidence and an extensive modeling effort of Geo-Online Exploratory Data Visualization (Geo-OEDV) tools, synthesized in terms of an Entity-Relationship schema. The COVID-19 pandemic outbreak has offered a significant case to study how and how much modern public communication needs spatially related data and effective implementation of tools whose inspection can impact decision-making at different levels. Our resulting model will allow several stakeholders (general users, policy-makers, and researchers/analysts) to gain awareness on the assets of structured online communication and resource owners to direct future development of these important tools.
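The Entity-Relationship modeling effort described above can be pictured with a small fragment. The entity and attribute names below are assumptions chosen for illustration, not the actual Geo-OEDV schema from the paper.

```python
from dataclasses import dataclass, field

# Hypothetical entities sketching a Geo-OEDV-style ER fragment.
@dataclass
class Provider:
    name: str  # e.g. a public health authority or private citizen

@dataclass
class Indicator:
    name: str  # e.g. "confirmed cases"
    unit: str

@dataclass
class Dashboard:
    url: str
    provider: Provider                               # many-to-one relationship
    indicators: list = field(default_factory=list)   # one-to-many relationship

    def tracks(self, ind: Indicator):
        self.indicators.append(ind)

authority = Provider("Hypothetical Health Authority")
board = Dashboard("https://example.org/covid", authority)
board.tracks(Indicator("confirmed cases", "count"))
print(len(board.indicators))  # 1
```

Clustering the collected sites then amounts to grouping `Dashboard` instances by attributes such as provider type or the indicators they track.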


Research output is increasing rapidly, and considerable effort goes into exploring related publications over the Internet. Every database has a different architecture, and performance varies with the storage architecture and medium. In this paper we analyze the two main storage types for Semantic Web data: (i) native, in-memory stores, and (ii) non-native, disk-resident stores, which rely on database management systems such as MySQL and Oracle purely for storage. The choice of database model matters from the outset: once data is stored, it must be accessible efficiently. The proposed methodology consists of test cases for data retrieval and a query optimization method to analyze database performance. Access to these databases is through queries; the LUBM (Lehigh University Benchmark) is used for performance testing, not for storing data. Semantic Web Data (SWD) makes it possible to encode related data so that it can be retrieved efficiently. The main objective of this research is to compare the two types of SWD stores, native and non-native, and to analyze their performance.
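The native/non-native distinction can be made concrete with a toy sketch: a native store keeps triples directly in memory and matches patterns against them, whereas a non-native store would map the same triples onto relational tables in MySQL or Oracle. This minimal in-memory store and its LUBM-style sample data are illustrative only.

```python
# Minimal in-memory ("native") triple store sketch; a non-native store would
# instead persist triples as rows in a relational DBMS and query via SQL.
class TripleStore:
    def __init__(self):
        self.triples = set()

    def add(self, s, p, o):
        self.triples.add((s, p, o))

    def query(self, s=None, p=None, o=None):
        """Pattern match; None acts as a wildcard (a toy SPARQL-like lookup)."""
        return [(ts, tp, to) for ts, tp, to in self.triples
                if (s is None or ts == s)
                and (p is None or tp == p)
                and (o is None or to == o)]

store = TripleStore()
# LUBM-style university data (illustrative resource names).
store.add("GraduateStudent1", "takesCourse", "Course0")
store.add("GraduateStudent1", "memberOf", "Department0")
print(store.query(p="takesCourse"))
```

Because the native store answers the pattern match directly from memory, it avoids the disk I/O and SQL translation a non-native store incurs, which is the performance trade-off the benchmark measures.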


Author(s):  
Jennifer R Smith ◽  
G Thomas Hayman ◽  
Shur-Jen Wang ◽  
Stanley J F Laulederkind ◽  
Matthew J Hoffman ◽  
...  

Abstract Formed in late 1999, the Rat Genome Database (RGD, https://rgd.mcw.edu) will be 20 in 2020, the Year of the Rat. Because the laboratory rat, Rattus norvegicus, has been used as a model for complex human diseases such as cardiovascular disease, diabetes, cancer, neurological disorders and arthritis, among others, for >150 years, RGD has always been disease-focused and committed to providing data and tools for researchers doing comparative genomics and translational studies. At its inception, before the sequencing of the rat genome, RGD started with only a few data types localized on genetic and radiation hybrid (RH) maps and offered only a few tools for querying and consolidating that data. Since that time, RGD has expanded to include a wealth of structured and standardized genetic, genomic, phenotypic, and disease-related data for eight species, and a suite of innovative tools for querying, analyzing and visualizing this data. This article provides an overview of recent substantial additions and improvements to RGD’s data and tools that can assist researchers in finding and utilizing the data they need, whether their goal is to develop new precision models of disease or to more fully explore emerging details within a system or across multiple systems.


2019, Vol 28 (01), pp. 016-026
Author(s):  
Fei Wang ◽  
Anita Preininger

Introduction: Artificial intelligence (AI) technologies continue to attract interest from a broad range of disciplines in recent years, including health. The increase in computer hardware and software applications in medicine, as well as digitization of health-related data together fuel progress in the development and use of AI in medicine. This progress provides new opportunities and challenges, as well as directions for the future of AI in health. Objective: The goals of this survey are to review the current state of AI in health, along with opportunities, challenges, and practical implications. This review highlights recent developments over the past five years and directions for the future. Methods: Publications over the past five years reporting the use of AI in health in clinical and biomedical informatics journals, as well as computer science conferences, were selected according to Google Scholar citations. Publications were then categorized into five different classes, according to the type of data analyzed. Results: The major data types identified were multi-omics, clinical, behavioral, environmental and pharmaceutical research and development (R&D) data. The current state of AI related to each data type is described, followed by associated challenges and practical implications that have emerged over the last several years. Opportunities and future directions based on these advances are discussed. Conclusion: Technologies have enabled the development of AI-assisted approaches to healthcare. However, there remain challenges. Work is currently underway to address multi-modal data integration, balancing quantitative algorithm performance and qualitative model interpretability, protection of model security, federated learning, and model bias.


2021, Vol 3 (1), pp. 28-38
Author(s):  
Ali Alaimi ◽  
Malathi Govind ◽  
Mohanad Halaweh

The aim of this exploratory research is to investigate people's perception of data sensitivity and their willingness to share such data. There has been little research within the UAE identifying what the public/ordinary people consider sensitive data and what they do not, and which data can or cannot be shared with others such as social media applications, e-commerce websites, and friends. To achieve the aim of this research, empirical data was collected using a survey designed to evaluate the sensitivity of five categories of data types (personal, contact, online life, financial, and secure identifiers). The research findings revealed that the respondents tended to feel relatively low sensitivity to personal data, but they tended to feel a higher degree of sensitivity to financial-related data, which they were also not willing to share. However, some personal data items, such as medical history records, were largely deemed not sensitive by participants. This paper presents and discusses new insights and research implications based on findings from the UAE context.


2015, Vol 22 (3), pp. 529-535
Author(s):  
James C McClay ◽  
Peter J Park ◽  
Mark G Janczewski ◽  
Laura Heermann Langford

Abstract Background Emergency departments in the United States handle over 130 million visits per year. The demands for information from these visits require interoperable data exchange standards. While multiple data exchange specifications are in use, none had undergone rigorous standards review. This paper describes the creation and balloting of the Health Level Seven (HL7) Data Elements for Emergency Department Systems (DEEDS). Methods Existing data exchange specifications were collected and organized into categories reflecting the workflow of emergency care. The concepts were then mapped to existing standards for vocabulary, data types, and the HL7 information model. The HL7 community then processed the specification through the normal balloting process, addressing all comments and concerns. The resulting specification was then submitted for publication as an HL7 informational standard. Results The resulting specification contains 525 concepts related to emergency care required for operations and reporting to external agencies. An additional 200 of the most commonly ordered laboratory tests were included. Each concept was given a unique identifier and mapped to Logical Observation Identifiers Names and Codes (LOINC). HL7 standard data types were applied. Discussion The HL7 DEEDS specification represents the first set of common ED-related data elements to undergo rigorous standards development. The availability of this standard will contribute to improved interoperability of emergency care data.
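The mapping described above (a unique concept identifier, a LOINC binding, and an HL7 data type per element) can be sketched as follows. The concept identifiers here are hypothetical, not actual DEEDS identifiers; the LOINC codes shown are commonly used vital-sign codes but should be verified against the LOINC database.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class DataElement:
    concept_id: str   # unique identifier within the specification (hypothetical)
    name: str
    loinc_code: str   # mapped LOINC code
    hl7_type: str     # HL7 data type, e.g. NM (numeric), CWE (coded with exceptions)

# Illustrative elements only; the real specification defines 525 concepts.
elements = [
    DataElement("ED-0101", "Heart rate", "8867-4", "NM"),
    DataElement("ED-0102", "Systolic blood pressure", "8480-6", "NM"),
]

# Index by LOINC code, as a receiving system might when decoding a message.
by_loinc = {e.loinc_code: e for e in elements}
print(by_loinc["8867-4"].name)  # Heart rate
```

Because every concept carries both a stable identifier and a LOINC binding, two systems exchanging ED data can agree on meaning without sharing internal field names, which is the interoperability gain the standard targets.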

