typical data
Recently Published Documents

TOTAL DOCUMENTS: 164 (FIVE YEARS: 46)
H-INDEX: 12 (FIVE YEARS: 1)

2022
Author(s): Yao Cai, Kate Grieve, Pedro Mecê

High-resolution ophthalmic imaging devices, including spectral-domain and full-field optical coherence tomography (SDOCT and FFOCT), are adversely affected by continuous involuntary retinal axial motion. Here, we thoroughly quantify and characterize retinal axial motion with both high temporal resolution (200,000 A-scans/s) and high axial resolution (4.5 µm), recorded with an SDOCT device over a typical data acquisition duration of 3 s in 14 subjects. We demonstrate that although breath-holding can help decrease large, slow drifts, it increases small, fast fluctuations, which is not ideal when motion compensation is desired. Finally, by simulating the action of an axial motion stabilization control loop, we show that a loop rate of 1.2 kHz is ideal to achieve 100% robust clinical in-vivo retinal imaging.
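The effect of loop rate on residual motion can be illustrated with a minimal track-and-hold simulation. This is a sketch only: the drift and fluctuation magnitudes below are illustrative assumptions, not the values measured in the paper.

```python
import numpy as np

def simulate_residual_motion(loop_rate_hz, duration_s=3.0, fs=200_000,
                             drift_um_per_s=20.0, fluct_um=1.0, seed=0):
    """Residual axial motion (RMS, in um) left after a simple track-and-hold
    stabilization loop running at loop_rate_hz corrects the latest sample."""
    rng = np.random.default_rng(seed)
    n = int(duration_s * fs)
    t = np.arange(n) / fs
    # Axial position: slow linear drift plus a band-limited random fluctuation.
    drift = drift_um_per_s * t
    fluct = np.cumsum(rng.normal(0, fluct_um / np.sqrt(fs), n))  # random walk
    position = drift + fluct
    # The loop updates its correction once per loop period (zero-order hold).
    period = int(fs / loop_rate_hz)
    correction = np.repeat(position[::period], period)[:n]
    residual = position - correction
    return float(np.sqrt(np.mean(residual ** 2)))

rms_slow = simulate_residual_motion(loop_rate_hz=100)
rms_fast = simulate_residual_motion(loop_rate_hz=1200)
# A faster loop leaves less uncorrected motion between updates.
assert rms_fast < rms_slow
```

The qualitative point survives any reasonable parameter choice: residual error scales with how much the eye can move within one loop period, which is why a kHz-range loop rate is needed.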


2021
pp. 1-5
Author(s): Cosima Meyer

ABSTRACT: This article describes how to teach an interactive, semester-long statistics and programming class. The setting can also be applied to shorter and longer classes, as well as to introductory and advanced courses. I propose a project-based seminar that also incorporates elements of an inverted classroom. As a result of this combination, the seminar supports students' learning progress and also creates engaging virtual classes. To demonstrate how to apply a project-based seminar setting to teaching statistics and programming classes, I use an introductory class on data wrangling and management with the statistical software R. Students are guided through a typical data science workflow that requires data management and data wrangling and concludes with visualizing and presenting first research results during a simulated mini-conference.


2021
Author(s): Muhamad Syirojudin, Eko Haryono, Suaidi Ahadi

Abstract: Indonesia relies on only a limited number of repeat stations because its archipelagic setting, with extensive seas, produces a clustered station distribution. This paper explores geostatistical modeling to overcome this typical data characteristic. The modeling used repeat station data from the 1985 to 2015 epochs. The research compared ordinary kriging (OK) with Spherical Cap Harmonic Analysis (SCHA) and polynomial modeling. The results show that the root mean square error (RMSE) of declination, inclination, and total intensity varies among epochs. For the declination component, the OK method produces a smaller average RMSE (7.67 minutes) than SCHA (9.26 minutes) and the polynomial (7.97 minutes). For the inclination component, OK has an average RMSE of 9.55 minutes, smaller than SCHA (10.05 minutes) but slightly higher than the polynomial (9.36 minutes). For the total intensity component, OK produces an average RMSE of 63.58 nT, smaller than SCHA (82.24 nT) and the polynomial (68.97 nT). These findings show that kriging is a promising method for modeling the regional geomagnetic field, especially in areas with limited and clustered data.


2021
Vol 13 (8)
pp. 208
Author(s): Peter Kieseberg, Sebastian Schrittwieser, Edgar Weippl

The data market concept has gained considerable momentum in recent years, fuelled by initiatives to set up such markets, e.g., at the European level. Still, the typical data market concept aims at providing a centralised platform, with all of its positive and negative side effects. Internal data markets, also called local or on-premise data markets, on the other hand, are set up to allow data trade inside an institution (e.g., between divisions of a large company) or between members of a small, well-defined consortium, thus allowing remuneration for providing data inside these structures. Yet, while research on securing global data markets has garnered some attention in recent years, internal data markets have been treated as being more or less similar in this respect. In this paper, we outline the major differences between global and internal data markets with respect to security and explain why further research is required. Furthermore, we provide a fundamental model for a secure internal data market that can be used as a starting point for the generation of concrete internal data market models. Finally, we provide an overview of the research questions we deem most pressing for making the internal data market concept work securely, thus allowing for more widespread adoption.


2021
Author(s): Jason Fletcher, Yuchang Wu, Tianchang Li, Qiongshi Lu

Researchers often claim that sibling analysis can be used to separate causal genetic effects from the assortment of biases that contaminate most downstream genetic studies. Indeed, typical results from sibling models show large (>50%) attenuations in the associations between polygenic scores and phenotypes compared to non-sibling models, consistent with researchers' expectations about bias reduction. This paper examines these expectations using family (quad) data and simulations that include indirect genetic effect processes, and evaluates the ability of sibling models to uncover direct genetic effects. We find that sibling models generally fail to uncover direct genetic effects; indeed, these models suffer both upward and downward biases that are difficult to sign in typical data. When genetic nurture effects exist, sibling models create 'measurement error' that attenuates associations between polygenic scores and phenotypes. As the correlation between direct and indirect effects changes, this bias can increase or decrease. Our findings suggest that results from sibling analyses aimed at uncovering direct genetic effects should be interpreted with caution.
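The two opposing biases described here can be reproduced in a small simulation. The effect sizes, sibling-correlation structure, and measurement-error variance below are illustrative assumptions, not estimates from the paper's quad data.

```python
import numpy as np

rng = np.random.default_rng(7)
n = 200_000                      # sibling pairs
b_direct, b_indirect = 0.5, 0.3  # illustrative direct and genetic-nurture effects
v_err = 0.3                      # polygenic-score measurement-error variance

shared = rng.normal(0, np.sqrt(0.5), n)        # transmitted parental component
g1 = shared + rng.normal(0, np.sqrt(0.5), n)   # sib 1 true genetic value
g2 = shared + rng.normal(0, np.sqrt(0.5), n)   # sib 2 (sibling correlation 0.5)
nurture = 2 * shared                           # parental genotype -> environment
y1 = b_direct * g1 + b_indirect * nurture + rng.normal(0, 1, n)
y2 = b_direct * g2 + b_indirect * nurture + rng.normal(0, 1, n)
p1 = g1 + rng.normal(0, np.sqrt(v_err), n)     # noisy polygenic scores
p2 = g2 + rng.normal(0, np.sqrt(v_err), n)

def slope(x, y):
    return float(np.cov(x, y)[0, 1] / np.var(x))

pop_est = slope(np.r_[p1, p2], np.r_[y1, y2])  # population (non-sibling) model
sib_est = slope(p1 - p2, y1 - y2)              # within-sibling (difference) model
# Nurture inflates the population estimate; differencing removes nurture but
# amplifies measurement-error attenuation, so both estimates miss b_direct.
assert pop_est > b_direct > sib_est
```

The sibling difference removes the shared nurture term yet also removes half of the true genetic variance while doubling the score noise, which is exactly the 'measurement error' mechanism the abstract points to.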


Sensors
2021
Vol 21 (14)
pp. 4774
Author(s): Weiwei Zhang, Deji Chen, Yang Kong

The accuracy of bearing fault diagnosis is of great significance for the reliable operation of rotating machinery. In recent years, increasing attention has been paid to intelligent fault diagnosis techniques based on deep learning. However, most of these methods rely on supervised learning with large amounts of labeled data, which is a challenge for industrial applications. To reduce the dependence on labeled data, a self-supervised joint learning (SSJL) fault diagnosis method based on three-channel vibration images is proposed. The method combines self-supervised learning with supervised learning, makes full use of unlabeled data to learn fault features, and further improves the feature recognition rate by transforming the data into three-channel vibration images. The validity of the method was verified using two typical motor bearing data sets. Experimental results show that the method achieves higher diagnostic accuracy with small quantities of labeled data and outperforms existing methods.
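The abstract does not specify how the three-channel vibration images are constructed; one plausible construction, folding three consecutive signal windows into the R, G, and B channels, can be sketched as follows. All sizes and signal parameters are assumed for illustration.

```python
import numpy as np

def to_three_channel_image(signal, size=64):
    """Fold a 1-D vibration signal into a (size, size, 3) image.

    One plausible construction (the paper's exact recipe may differ):
    three consecutive windows of the signal become the three channels,
    each min-max normalized to [0, 1].
    """
    n = size * size
    assert len(signal) >= 3 * n, "signal too short for three channels"
    channels = []
    for k in range(3):
        seg = signal[k * n:(k + 1) * n].reshape(size, size)
        lo, hi = seg.min(), seg.max()
        channels.append((seg - lo) / (hi - lo + 1e-12))
    return np.stack(channels, axis=-1)

# Toy bearing-like signal: a carrier tone plus broadband noise.
rng = np.random.default_rng(0)
t = np.arange(3 * 64 * 64) / 12_000            # assumed 12 kHz sampling rate
sig = np.sin(2 * np.pi * 1_000 * t) + 0.1 * rng.normal(size=t.size)
img = to_three_channel_image(sig)
assert img.shape == (64, 64, 3)
```

Once the signal is in image form, standard convolutional backbones and image-based self-supervised pretext tasks can be applied to it directly, which is the practical appeal of such a transformation.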


2021
Vol 28 (1)
pp. 80-93
Author(s): Darius Pupeikis, Lina Morkūnaitė, Mindaugas Daukšys, Arūnas Aleksandras Navickas, Svajūnas Abromas

While the AEC industry is moving towards digitalization, off-site rebar prefabrication has become common practice. Currently, most companies use a long-established order processing method in which the customer submits 2D paper or PDF-based drawings. The manufacturers are then obliged to perform additional detailing, redrawing, calculations, and preparation of other information required for manufacturing. In this typical scenario there is great repetition of the same tasks, with an obvious loss of time and an increased likelihood of human error. Improvements can be made through an advanced digital production workflow and the use of open BIM standards (e.g., IFC, XML, BVBS). Therefore, this paper presents the typical data flow algorithm in contrast to an automated data flow for reinforcement manufacturing. The two approaches are then compared and analyzed using Multi-Criteria Decision-Making (MCDM) methods. The results show promising prospects for companies willing to automate their data flow processes through the use of 3D drawings and digital data from the BIM model in their plants.
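A weighted-sum model is one of the simplest MCDM methods and illustrates how such a two-alternative comparison works. The criteria, raw values, and weights below are hypothetical, chosen only to show the mechanics, and are not the paper's data or its specific MCDM method.

```python
import numpy as np

criteria = ["order processing time (h)", "error rate (%)",
            "setup cost (kEUR)", "data reuse (1-10)"]
is_benefit = np.array([False, False, False, True])  # higher-is-better flags
# Rows: typical 2D/PDF workflow vs automated BIM-based (IFC/BVBS) workflow.
raw = np.array([
    [40.0, 5.0, 10.0, 3.0],
    [10.0, 1.0, 50.0, 9.0],
])
weights = np.array([0.35, 0.30, 0.15, 0.20])  # assumed importance weights

# Normalize: benefit criteria as x / max, cost criteria as min / x,
# so every entry lies in (0, 1] with 1 being the best alternative.
norm = np.where(is_benefit, raw / raw.max(axis=0), raw.min(axis=0) / raw)
totals = norm @ weights   # higher total = preferred alternative
```

With these assumed numbers the automated workflow scores higher despite its larger setup cost; in a real MCDM study the ranking can of course flip depending on the elicited weights.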


2021
Vol 11 (1)
Author(s): Niklas Grieger, Justus T. C. Schwabedal, Stefanie Wendel, Yvonne Ritze, Stephan Bialonski

Abstract: Reliable automation of the labor-intensive manual task of scoring animal sleep can facilitate the analysis of long-term sleep studies. In recent years, deep-learning-based systems, which learn optimal features from the data, have increased scoring accuracies for the classical sleep stages of Wake, REM, and Non-REM. Meanwhile, it has been recognized that the statistics of transitional stages such as pre-REM, found between Non-REM and REM, may hold additional insight into the physiology of sleep and are now under active investigation. We propose a classification system based on a simple neural network architecture that scores the classical stages as well as pre-REM sleep in mice. When restricted to the classical stages, the optimized network showed state-of-the-art classification performance, with an out-of-sample F1 score of 0.95 in male C57BL/6J mice. When unrestricted, the network showed lower F1 scores on pre-REM (0.5) compared to the classical stages. This result is comparable to previous attempts to score transitional stages in other species, such as transition sleep in rats or N1 sleep in humans. Nevertheless, we observed that sequences of predictions including pre-REM typically transitioned from Non-REM to REM, reflecting the sleep dynamics observed by human scorers. Our findings provide further evidence for the difficulty of scoring transitional sleep stages, likely because such stages are under-represented in typical data sets or show large inter-scorer variability. We further provide our source code and an online platform for running predictions with our trained network.
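Per-stage F1 scores of the kind reported here can be computed directly from epoch-wise labels. The toy hypnogram below is invented to mimic the pattern the abstract describes: a rare transitional pre-REM stage whose F1 lags behind the common stages.

```python
import numpy as np

def per_class_f1(y_true, y_pred, labels):
    """F1 score per sleep stage: harmonic mean of precision and recall."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    scores = {}
    for lab in labels:
        tp = np.sum((y_pred == lab) & (y_true == lab))
        fp = np.sum((y_pred == lab) & (y_true != lab))
        fn = np.sum((y_pred != lab) & (y_true == lab))
        denom = 2 * tp + fp + fn
        scores[lab] = 2 * tp / denom if denom else 0.0
    return scores

# Invented epoch-by-epoch labels: pre-REM is rare and partly confused with REM.
truth = ["Wake"] * 40 + ["NREM"] * 40 + ["pre-REM"] * 4 + ["REM"] * 16
pred  = ["Wake"] * 40 + ["NREM"] * 40 \
        + ["pre-REM", "pre-REM", "REM", "REM"] + ["REM"] * 16
f1 = per_class_f1(truth, pred, ["Wake", "NREM", "pre-REM", "REM"])
```

Even with only two of 100 epochs misassigned, the pre-REM F1 drops well below the other stages because the class is so small, the same class-imbalance effect the abstract invokes for transitional sleep.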

