Knowledge-Based Information Resource Management System for Materials of Fast Reactors

Author(s): Hsuan-Tsung Sean Hsieh, Ning Li, Yitung Chen, Kenny Kwan, Jen-Yuan Huang, ...

In the development of advanced fast reactors, materials and coolant/material interactions pose a critical barrier to higher-temperature and longer-core-life designs. For sodium-cooled advanced burner reactors, experience has shown that the limited set of qualified structural materials and fuel cladding severely constrains economic performance. In other liquid-metal-cooled reactor concepts, advanced materials and a better understanding and control of coolant/material interactions are necessary to realize their potential. Over the past few decades, researchers at universities, national laboratories and related industrial organizations have continuously generated invaluable data and knowledge about materials and their interactions with coolants. Given cost and time constraints, the paradigm for designing and implementing a successful Gen IV nuclear energy system can be shifted and updated through the integration of information and internet technologies. Such efforts are best realized by implementing collective (centralized or distributed) data stores that serve the community with organized material data sets; the material property data provided by MatWeb.com and the ongoing development of the web-based Gen IV materials handbook are a few examples. From a system design perspective, the sodium-cooled fast reactor (SFR) proposed among the Gen IV systems has been significantly developed. According to the Gen IV ten-year program plan, current R&D work will focus on demonstrating the design and safety characteristics and on design optimization. All of these activities follow the path of data generation, analysis, knowledge discovery and, finally, decision making and implementation. We propose to create a modularized web-based information system with models to systematically catalog existing data and to guide new development and testing aimed at acquiring new data. Technically, information retrieval and knowledge discovery tools will be implemented so that researchers have both information lookup from the material database and technology/development gap analysis from intelligent-agent and reporting components. The goal of the system is not merely to provide another database, but to create a sharable, expandable, platform-free and location-free online system for research institutes and industrial partners.
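To make the proposed catalog concrete, the sketch below shows what a minimal material-property record and the database-lookup component of such a system might look like. It is a hypothetical illustration only: the field names (alloy, prop, temperature_C, value, units, source) and the placeholder values are assumptions, not the schema or data of the proposed system.

```python
from dataclasses import dataclass

@dataclass
class MaterialRecord:
    """One catalogued measurement; field names are illustrative assumptions."""
    alloy: str           # e.g. "HT9", "316SS"
    prop: str            # e.g. "yield_strength"
    temperature_C: float
    value: float
    units: str
    source: str          # citation or test-report identifier

def lookup(records, alloy, prop, t_min, t_max):
    """Return records for one alloy/property within a temperature window."""
    return [r for r in records
            if r.alloy == alloy and r.prop == prop
            and t_min <= r.temperature_C <= t_max]

# Placeholder values for illustration only, not measured data.
catalog = [
    MaterialRecord("HT9", "yield_strength", 500.0, 430.0, "MPa", "report-A"),
    MaterialRecord("HT9", "yield_strength", 700.0, 310.0, "MPa", "report-B"),
]
print(lookup(catalog, "HT9", "yield_strength", 400.0, 650.0))
```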

2019
Author(s): Lubos Molcan

Abstract: Physiological processes oscillate in time. Circadian oscillations, with a period of approximately 24 h, are among the most important and most studied. To evaluate the presence and significance of 24-h oscillations, physiological time-distributed data (TDD) are often fitted to a cosinor model using a wide range of irregularly updated native apps. Users familiar with MATLAB, R or other programming languages can adjust the parameters of the cosinor fit to their needs. Nowadays, many software applications are hosted on remote servers running 24/7; such server-based applications enable quick analysis of large data sets and run on a wide range of terminal devices through standard web browsers. We created a simple web-based cosinor application, Cosinor.Online. The application code is written in PHP. TDD are handled in a MySQL database and can be copied directly from an Excel file into the web form. The analysis results contain information about the fitted 24-h oscillation and a unique ID. The identifier allows users to reopen their data and results repeatedly over one month, or to remove their data from the MySQL database. Our web-based application can be used for quick and simple inspection of 24-h oscillations in various biological and physiological TDD.
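For readers unfamiliar with the method: a single-component cosinor model, y(t) = M + A·cos(2πt/τ + φ) with τ = 24 h, becomes an ordinary least-squares problem once rewritten as M + β·cos(ωt) + γ·sin(ωt). The sketch below is not the Cosinor.Online PHP code; it is a minimal Python illustration of that fit, assuming timestamps are given in hours and using synthetic data.

```python
import numpy as np

def cosinor_fit(t_hours, y, period=24.0):
    """Least-squares single-component cosinor: y ~ MESOR + A*cos(2*pi*t/period + phi)."""
    t = np.asarray(t_hours, dtype=float)
    y = np.asarray(y, dtype=float)
    w = 2.0 * np.pi / period
    # Design matrix for the linearised model M + beta*cos(wt) + gamma*sin(wt).
    X = np.column_stack([np.ones_like(t), np.cos(w * t), np.sin(w * t)])
    mesor, beta, gamma = np.linalg.lstsq(X, y, rcond=None)[0]
    amplitude = np.hypot(beta, gamma)
    acrophase = np.arctan2(-gamma, beta)   # phase angle phi of the fitted cosine, radians
    return mesor, amplitude, acrophase

# Illustrative use with synthetic data (not physiological measurements):
t = np.arange(0, 72, 2.0)                 # 3 days sampled every 2 h
y = 10 + 3 * np.cos(2 * np.pi * t / 24 - 1.0) + np.random.normal(0, 0.5, t.size)
print(cosinor_fit(t, y))
```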


Sensors, 2020, Vol 20 (9), pp. 2737
Author(s): Leandro Ordonez-Ante, Gregory Van Seghbroeck, Tim Wauters, Bruno Volckaert, Filip De Turck

Citizen engagement is one of the key factors for smart city initiatives to remain sustainable over time. This in turn entails providing citizens and other relevant stakeholders with up-to-date data and tools that enable them to derive insights that add value to their day-to-day lives. The massive volume of data constantly produced in these smart city environments makes satisfying this requirement particularly challenging. This paper introduces Explora, a generic framework for serving the interactive low-latency requests typical of visual exploratory applications on spatiotemporal data. Explora leverages stream processing to derive, at ingestion time, synopsis data structures that concisely capture the spatial and temporal trends and dynamics of the sensed variables and serve as compacted data sets for providing fast (approximate) answers to visual queries on smart city data. The experimental evaluation conducted on proof-of-concept implementations of Explora, based on traditional database and distributed data processing setups, shows a decrease of up to two orders of magnitude in query latency compared with queries running on the raw base data, at the expense of less than 10% in query accuracy and 30% in data footprint. The implementation of the framework on real smart city data, along with the experimental results, demonstrates the feasibility of the proposed approach.
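The core idea can be pictured as maintaining per-cell, per-time-bucket aggregates while the sensor stream is ingested, and then answering map or chart queries from those aggregates rather than from the raw readings. The sketch below is a deliberately simplified, hypothetical illustration of that pattern (opaque spatial cell identifiers and hourly buckets are assumptions); it is not Explora's actual synopsis implementation.

```python
from collections import defaultdict

# Synopsis keyed by (grid_cell, hour_bucket); each entry keeps a running sum/count
# so averages can be answered without touching the raw measurements.
synopsis = defaultdict(lambda: {"sum": 0.0, "count": 0})

def ingest(cell, timestamp_s, value):
    """Update the synopsis as each sensor reading arrives (ingestion time)."""
    hour_bucket = int(timestamp_s // 3600)
    entry = synopsis[(cell, hour_bucket)]
    entry["sum"] += value
    entry["count"] += 1

def approx_average(cells, hour_from, hour_to):
    """Approximate answer to a visual query over a region and time window."""
    total, n = 0.0, 0
    for cell in cells:
        for hour in range(hour_from, hour_to):
            entry = synopsis.get((cell, hour))
            if entry:
                total += entry["sum"]
                n += entry["count"]
    return total / n if n else None
```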


Energies, 2017, Vol 10 (12), pp. 2079
Author(s): Walter Borreani, Alessandro Alemberti, Guglielmo Lomonaco, Fabrizio Magugliani, Paolo Saracco
Keyword(s): Gen IV

Eos, 2017
Author(s): Zhong Liu, James Acker

Using satellite remote sensing data sets can be a daunting task. Giovanni, a Web-based tool, facilitates access, visualization, and exploration of many of NASA’s Earth science data sets.


2017, Vol 73 (3), pp. 279-285
Author(s): Charlotte M. Deane, Ian D. Wall, Darren V. S. Green, Brian D. Marsden, Anthony R. Bradley

In this work, two freely available web-based interactive computational tools that facilitate the analysis and interpretation of protein–ligand interaction data are described. The first, WONKA, assists in uncovering interesting and unusual features (for example, residue motions) within ensembles of protein–ligand structures and enables facile sharing of observations between scientists. The second, OOMMPPAA, incorporates protein–ligand activity data with protein–ligand structural data using three-dimensional matched molecular pairs. OOMMPPAA highlights nuanced structure–activity relationships (SAR) and summarizes the available protein–ligand activity data in the protein context. In this paper, the background that led to the development of both tools is described. Their implementation is outlined, and their utility is demonstrated using in-house Structural Genomics Consortium (SGC) data sets and openly available data from the PDB and ChEMBL. Both tools are freely available to use and download at http://wonka.sgc.ox.ac.uk/WONKA/ and http://oommppaa.sgc.ox.ac.uk/OOMMPPAA/.
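A matched molecular pair relates two compounds that share a common core and differ by a single, well-defined substituent change, so that a difference in activity can be attributed to that change. The sketch below is a purely illustrative pairing over pre-decomposed (core, R-group) records with placeholder activities; it is not OOMMPPAA's three-dimensional implementation, and the field names and values are assumptions.

```python
from itertools import combinations

# Each record: a compound already decomposed into a shared core and one R-group,
# plus a measured activity (e.g. pIC50). Values below are placeholders, not real data.
compounds = [
    {"id": "cpd-1", "core": "scaffold-A", "r_group": "H",   "pIC50": 6.1},
    {"id": "cpd-2", "core": "scaffold-A", "r_group": "Cl",  "pIC50": 6.9},
    {"id": "cpd-3", "core": "scaffold-A", "r_group": "OMe", "pIC50": 5.8},
    {"id": "cpd-4", "core": "scaffold-B", "r_group": "H",   "pIC50": 7.2},
]

def matched_pairs(records):
    """Yield (compound_a, compound_b, activity_delta) for pairs sharing a core
    but differing in their single R-group substitution."""
    for a, b in combinations(records, 2):
        if a["core"] == b["core"] and a["r_group"] != b["r_group"]:
            yield a["id"], b["id"], round(b["pIC50"] - a["pIC50"], 2)

for pair in matched_pairs(compounds):
    print(pair)   # e.g. ('cpd-1', 'cpd-2', 0.8): the H -> Cl change on scaffold-A
```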


2020
Author(s): Anna M. Sozanska, Charles Fletcher, Dóra Bihary, Shamith A. Samarajiwa

Abstract: More than three decades ago, the microarray revolution brought high-throughput data generation capability to biology and medicine. Subsequently, the emergence of massively parallel sequencing technologies led to many big-data initiatives such as the Human Genome Project and the Encyclopedia of DNA Elements (ENCODE) project. These, in combination with cheaper, faster massively parallel DNA sequencing capabilities, have democratised multi-omic (genomic, transcriptomic, translatomic and epigenomic) data generation, leading to a data deluge in biomedicine. While some of these data sets are trapped in inaccessible silos, the vast majority are stored in public data resources and controlled-access data repositories, enabling their wider use (or misuse). Currently, most peer-reviewed publications require the data set associated with a study to be deposited in one of these public data repositories. However, clunky, difficult-to-use interfaces and subpar or incomplete annotation prevent the discovery, searching and filtering of these multi-omic data and hinder their re-purposing in other use cases. In addition, the proliferation of a multitude of different data repositories, with partially redundant storage of similar data, is yet another obstacle to their continued usefulness. Similarly, interfaces in which annotation is spread across multiple web pages, accession identifiers with ambiguous and multiple interpretations, and a lack of good curation make these data sets difficult to use. We have produced SpiderSeqR, an R package whose main features include integration between the NCBI GEO and SRA databases, enabling a unified search of SRA and GEO data sets and their associated annotations, conversion between database accessions, convenient filtering of results, and saving past queries for future use. All of these features aim to promote data reuse, to facilitate making new discoveries and to maximise the potential of existing data sets.
Availability: https://github.com/ss-lab-cancerunit/SpiderSeqR
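As a language-independent illustration of what a combined GEO/SRA query involves (this is not SpiderSeqR's R interface), the Python sketch below issues the same search term against the GEO DataSets ('gds') and SRA ('sra') databases through NCBI's public E-utilities; the search term itself is an arbitrary example.

```python
import json
import urllib.parse
import urllib.request

EUTILS = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi"

def search_ncbi(db, term, retmax=20):
    """Query one NCBI database (e.g. 'gds' for GEO DataSets, 'sra' for SRA)
    and return the list of matching UIDs."""
    params = urllib.parse.urlencode(
        {"db": db, "term": term, "retmax": retmax, "retmode": "json"})
    with urllib.request.urlopen(f"{EUTILS}?{params}") as response:
        result = json.load(response)["esearchresult"]
    return result["idlist"]

# One search term issued against both databases, mimicking a unified query.
term = "ENCODE[All Fields] AND chip-seq[All Fields]"
hits = {db: search_ncbi(db, term) for db in ("gds", "sra")}
print({db: len(ids) for db, ids in hits.items()})
```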

