process database
Recently Published Documents

TOTAL DOCUMENTS: 44 (last five years: 7)
H-INDEX: 4 (last five years: 1)

2021
Author(s): Maximilien Chaumon, Pier-Alexandre Rioux, Sophie Herbst, Ignacio Spiousas, Sebastian Kübel, ...

Abstract: The Covid-19 pandemic and the associated lockdowns triggered worldwide changes in the daily routines of human experience. The Blursday database provides measures of subjective time and related processes from more than 2,800 participants in 9 countries, tested on 14 questionnaires and 15 behavioral tasks during the Covid-19 pandemic. The easy-to-process database and all data collection tools are made fully accessible to researchers interested in studying the effects of social isolation on temporal information processing, time perspective, decision-making, sleep, metacognition, attention, memory, self-perception, and mindfulness. Blursday also includes relevant covariates such as sleep patterns, personality traits, psychological well-being, and lockdown indices. Herein, we exemplify the use of the database with novel quantitative insights into the effects of lockdown (stringency, mobility) and subjective confinement on time perception (duration, passage of time, temporal distances). We show that new discoveries are possible, as illustrated by an inter-individual central tendency effect in retrospective duration estimation.
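As a hedged illustration of the kind of analysis the abstract describes, the sketch below quantifies a central tendency effect in retrospective duration estimation by regressing reported durations on true durations; a slope below 1 indicates regression toward the mean. All data are simulated placeholders, and no assumption is made about the actual Blursday file layout.

```python
# Minimal sketch: quantifying a central tendency effect in retrospective
# duration estimates. A regression slope < 1 means short intervals are
# over-estimated and long ones under-estimated. Data here are simulated;
# this does not assume the actual Blursday file layout.
import numpy as np

rng = np.random.default_rng(1)
true_s = rng.uniform(5, 120, size=200)            # true durations [s]
mean_s = true_s.mean()
# Simulated reports regress 40% of the way toward the mean, plus noise.
reported_s = true_s + 0.4 * (mean_s - true_s) + rng.normal(0, 5, 200)

slope, intercept = np.polyfit(true_s, reported_s, 1)
print(f"slope = {slope:.2f} (< 1 indicates central tendency)")
```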


2021
Vol 27 (3), pp. 133-142

The main purpose of this article is to outline the challenges and barriers that hinder the successful implementation of innovative teaching methods, along with possible ways to overcome them. The main source of information is a survey conducted among teachers, in which they give their subjective assessments of the application of innovative methods in the learning process. The survey shows that overcoming these challenges and difficulties depends on a number of guidelines for future action; the two with the highest support from respondents are providing access to appropriate training that improves teachers' skills in applying innovative methods, and providing access to a database of suitable electronic resources for applying innovative teaching methods in the learning process.


Author(s): Sendong Ren, Yunwu Ma, Ninshu Ma, Qian Chen, Haiyuan Wu

Abstract: In the present research, a digital twin of coaxial one-side resistance spot welding (COS-RSW) was established for real-time prediction of the transient temperature field. A 3D model of the COS-RSW joint was developed based on the in-house finite element (FE) code JWRIAN-SPOT. The experimentally verified FE model was employed to generate a large dataset of temperatures for the COS-RSW process. Multidimensional interpolation was applied to the process database to generate output predictions. The FE model can predict the thermal cycle of COS-RSW joints under different parameter combinations. The interpolation effect of individual welding parameters was discussed, and a power-weighted treatment of welding time proved essential to ensure accuracy. With the support of this dataset, the digital twin can deliver a visualized prediction of the COS-RSW temperature field within 10 seconds, whereas numerical modelling needs at least 1 hour. The proposed application of the digital twin has the potential to improve the efficiency of process optimization in engineering.
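The interpolation step can be pictured with a minimal sketch: FE temperature histories precomputed on a regular grid of welding parameters are interpolated to answer queries for unseen parameter combinations in milliseconds. The grid, parameter names, and temperature values below are hypothetical placeholders, not the JWRIAN-SPOT data.

```python
# Minimal sketch: real-time temperature lookup by interpolating a
# precomputed grid of FE results (hypothetical data layout).
import numpy as np
from scipy.interpolate import RegularGridInterpolator

# Assumed parameter grid used for the offline FE runs.
currents = np.linspace(6.0, 12.0, 7)      # welding current [kA]
weld_times = np.linspace(0.1, 0.5, 9)     # welding time [s]
time_steps = np.linspace(0.0, 0.5, 51)    # thermal-cycle sampling [s]

# Temperature for each (current, weld time, step); random placeholder
# values stand in for the FE-generated "big data".
rng = np.random.default_rng(0)
temps = rng.uniform(20.0, 1500.0, size=(7, 9, 51))

interp = RegularGridInterpolator((currents, weld_times, time_steps), temps)

# Query a full thermal cycle for an unseen parameter combination in
# milliseconds instead of re-running the FE model.
query = np.column_stack([
    np.full_like(time_steps, 9.3),   # current not on the grid
    np.full_like(time_steps, 0.27),  # weld time not on the grid
    time_steps,
])
cycle = interp(query)
print(cycle[:5])
```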


2021
Vol 50 (1), pp. 14-14
Author(s): Alan D. Fekete

Many computing researchers and practitioners may be surprised to find a "research highlight" that innovates on the way database transactions are processed. Work in the early 1970s, by Turing winner Jim Gray and others, established a standard set of techniques for transaction management. These remain the basis of most commercial and open-source platforms [1], and they are still taught in university database classes. So why is important research still needed on this topic? The technology environment keeps evolving, and new performance characteristics mean that new algorithms and system designs become appropriate. This perspective will summarise the early work and point to how the field has continued to progress.


2021
Vol 2
Author(s): Arthur Jakobs, Simon Schulte, Stefan Pauliuk

Hybrid Life Cycle Assessment (HLCA) methods attempt to address the limitations in process coverage and resolution of the more traditional process-based and input-output Life Cycle Assessments (PLCA, IOLCA). Because the two approaches use different units, HLCA methods rely on commodity price information to convert the physical units used in process inventories into the monetary units used in input-output models. However, prices for the same commodity can vary significantly between different supply chains, or even between levels of the same supply chain. The resulting commodity price variance in turn adds uncertainty to the hybrid environmental footprint. In this paper we take international trade statistics from BACI/UN-COMTRADE to estimate the variance of commodity prices, and use these estimates in an integrated HLCA model coupling the process database ecoinvent with the EE-MRIO database EXIOBASE. We show that geographical aggregation of PLCA processes is a significant driver of the price variance of their reference products. We analyse the effect of price variance on process carbon footprint intensities (CFIs) and find that the CFIs of hybridised processes show a median increase of 6–17% due to hybridisation, for two different double-counting scenarios, and a median uncertainty of −2 to +4% due to price variance. Furthermore, we illustrate the effect of price variance on the carbon footprint uncertainty in an HLCA study of Swiss household consumption. Although the relative footprint increase due to hybridisation is small to moderate, at 8–14% for two different double-counting correction strategies, the uncertainty due to price variability in this contribution to the footprint is very high, with 95% confidence intervals of (−28%, +90%) and (−23%, +68%) relative to the median. The magnitude and strong positive skewness of this uncertainty highlight the importance of taking price variance into account when performing hybrid LCA.
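A hedged sketch of the price-variance propagation the abstract describes: sample commodity prices from a positively skewed (lognormal) distribution, convert physical flows to monetary terms, and read off the footprint's confidence interval. All flows, prices, and intensities below are invented for illustration and are not ecoinvent or EXIOBASE values.

```python
# Minimal sketch: propagating commodity-price variance into a hybrid
# carbon footprint via Monte Carlo sampling (all numbers hypothetical).
import numpy as np

rng = np.random.default_rng(42)
n = 10_000

# Process inventory: physical flows [kg] feeding the IO side.
flows_kg = np.array([120.0, 35.0, 8.0])

# Price distributions per commodity [EUR/kg]: lognormal captures the
# positive skew observed in trade statistics.
price_mu = np.log(np.array([2.0, 15.0, 40.0]))
price_sigma = np.array([0.4, 0.6, 0.3])
prices = rng.lognormal(price_mu, price_sigma, size=(n, 3))

# IO carbon intensities [kg CO2e / EUR] for the matching sectors.
io_intensity = np.array([0.8, 0.3, 1.1])

# Hybrid footprint contribution: convert kg -> EUR, then EUR -> CO2e.
footprint = (flows_kg * prices * io_intensity).sum(axis=1)

lo, med, hi = np.percentile(footprint, [2.5, 50, 97.5])
print(f"median {med:.0f} kg CO2e, 95% CI ({lo - med:+.0f}, {hi - med:+.0f})")
```

Note how the skewed price distribution produces an asymmetric confidence interval around the median, mirroring the (−28%, +90%) style of interval reported in the abstract.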


Author(s): Leyla Ayvarovna Gamidullaeva, Sergey Mikhailovich Vasin, Nadezhda Chernetsova, Elena Shkarupeta, Dina Kharicheva, ...

This chapter shows that statistics collection methods are the same for various types of websites. Often, a simple "counter" is used both for unique visitors to the site and for the total number of hits to the site from unique and previously registered users. Speaking of a "digital" or "smart" economy, the authors distinguish different categories (levels) of development: analysis, business-intelligence content, and large data warehouses. Business analytics can be divided into several areas: modeling and analysis of system dynamics; expert systems and databases; knowledge and technology; geographic information (geolocation); and systems analysis and design. The various methods and statistical models used to identify non-trivial patterns and propose solutions are today often grouped under the concept of data mining. Intelligent data analysis involves extracting knowledge from a body of data (databases). According to experts, data mining is one element of a larger process (database management) that also includes the analysis and cleaning of data.


2018
Author(s): Thomas P. Quinn, Samuel C. Lee, Svetha Venkatesh, Thin Nguyen

Abstract: Although neuropsychiatric disorders have a well-established genetic background, their specific molecular foundations remain elusive. This has prompted many investigators to design studies that identify explanatory biomarkers and then use those biomarkers to predict clinical outcomes. One approach involves using machine learning algorithms to classify patients based on blood mRNA expression from high-throughput transcriptomic assays. However, these endeavours typically fail to achieve the high level of performance, stability, and generalizability required for clinical translation. Moreover, such classifiers can lack interpretability because informative genes do not necessarily have relevance to researchers. For this study, we hypothesized that annotation-based classifiers can improve classification performance, stability, generalizability, and interpretability. To this end, we evaluated the performance of four classification algorithms on six neuropsychiatric data sets using four annotation databases. Our results suggest that the Gene Ontology Biological Process database can transform gene expression into an annotation-based feature space that improves the performance and stability of blood-based classifiers for neuropsychiatric conditions. We also show how annotation features can improve the interpretability of classifiers: since annotation databases are routinely used to assign biological importance to genes, annotation-based classifiers are easy to interpret because each feature is itself a biologically meaningful annotation. We found that using annotations as features improves the performance and stability of classifiers, and that the top-ranked annotations tend to contain the top-ranked genes, suggesting that the most predictive annotations are a superset of the most predictive genes. Based on this, and on the fact that annotations are used routinely to assign biological importance to genetic data, we recommend transforming gene-level expression into annotation-level expression prior to the classification of neuropsychiatric conditions.
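A minimal sketch of the gene-to-annotation transformation the abstract recommends: gene-level expression is averaged within each annotation term's gene set, and a classifier is trained on the resulting annotation-level features. The membership matrix and data below are random placeholders standing in for a real GO Biological Process mapping.

```python
# Minimal sketch: collapsing gene-level expression into annotation-level
# features (e.g., GO Biological Process terms) before classification.
# The gene-to-term mapping and data here are hypothetical placeholders.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(7)
n_samples, n_genes, n_terms = 60, 500, 50
X_genes = rng.normal(size=(n_samples, n_genes))   # expression matrix
y = rng.integers(0, 2, size=n_samples)            # case/control labels

# Membership matrix: M[t, g] = 1 if gene g belongs to annotation term t
# (in practice, parsed from GO / MSigDB annotation files).
M = (rng.random((n_terms, n_genes)) < 0.05).astype(float)

# Annotation-level expression: mean expression of each term's genes.
sizes = M.sum(axis=1, keepdims=True).clip(min=1)
X_terms = X_genes @ M.T / sizes.T

clf = LogisticRegression(max_iter=1000).fit(X_terms, y)
print("training accuracy:", clf.score(X_terms, y))
```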


2018
Vol 31 (6), pp. 312
Author(s): Carolina Vidal, Carina Ruano, Vera Bernardino, Pedro Lavado Carreira, Ana Lladó, ...

Introduction: Systemic sclerosis is a complex disorder that requires systematic screening. Our objective is to report the European Scleroderma Trials and Research group centre affiliation and its impact on our clinical practice.
Material and Methods: We describe the European Scleroderma Trials and Research group affiliation process, the database update, and the current patient evaluation with respect to demographic and clinical features. Cumulative mortality was analysed.
Results: We identified 19 female patients (all meeting the American College of Rheumatology/European League Against Rheumatism 2013 criteria for systemic sclerosis) under current follow-up, divided according to the LeRoy classification into diffuse cutaneous (n = 5), limited cutaneous (n = 11) and limited (n = 3) types, followed for a median period of 5, 12 and 6 years, respectively. Raynaud's phenomenon and abnormal nailfold capillaries were universally present. Interstitial lung disease was absent in the limited cutaneous form but present in 100% of the diffuse subtype. Pitting scars were more common in the diffuse form. Active disease was also more frequent in the diffuse form, and most patients with active disease were treated with endothelin receptor antagonists. Over 21 years (from 1994 to 2015) the mortality rate was 55% (n = 23/42). Age at time of death was significantly lower in the diffuse subtype.
Discussion: Our single-centre cohort shares many features with larger international reports and, more specifically, is in accordance with the patient characteristics described in the European Scleroderma Trials and Research group registries.
Conclusion: The European Scleroderma Trials and Research group registration motivated our systematic patient characterization and may be used as a tool for homogeneous disease registries.


2016
Vol 11 (01)
Author(s): Neeshu Chaudhary

This paper focuses on the problem of fast and efficient retrieval of images from a large database using a sketch as the query image. Searching is based on a descriptor that addresses the asymmetry between the binary sketch on the user side and the full-colour images in the database. In the proposed algorithm, the query sketch and the full-colour database images undergo the same feature extraction process. Database images are clustered offline, which reduces time complexity at runtime. An index is then built to describe, store, and organize image information and to help users find image resources conveniently and quickly. Feature vectors are first extracted from contours, and edges are then detected at different orientations using modulus-maxima edge detection in the contourlet domain. This approach improves on existing approaches in several respects: compactness of the feature vector, simplicity of implementation, retrieval performance, and efficient feature extraction with lower time complexity.
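A hedged sketch of the offline-clustering-plus-indexing idea: database descriptors are clustered ahead of time, and a query sketch's descriptor is compared only against the members of its cluster, so runtime cost scales with the bucket rather than the whole database. The descriptors below are random placeholders where the paper's contourlet-domain modulus-maxima features would go.

```python
# Minimal sketch: offline clustering of edge-based descriptors plus a
# cluster-pruned nearest-neighbour search at query time. Descriptors are
# random placeholders; the paper's contourlet features would go here.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(3)
db_desc = rng.normal(size=(5000, 64))   # one descriptor per database image

# Offline stage: cluster the database so a query only scans one bucket.
kmeans = KMeans(n_clusters=32, n_init=10, random_state=0).fit(db_desc)

def retrieve(query_desc: np.ndarray, k: int = 5) -> np.ndarray:
    """Return indices of the k nearest database images to the query."""
    bucket = kmeans.predict(query_desc[None, :])[0]
    members = np.flatnonzero(kmeans.labels_ == bucket)
    dists = np.linalg.norm(db_desc[members] - query_desc, axis=1)
    return members[np.argsort(dists)[:k]]

query = rng.normal(size=64)             # descriptor of the user's sketch
print(retrieve(query))
```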

