field database
Recently Published Documents


TOTAL DOCUMENTS: 42 (five years: 14)
H-INDEX: 6 (five years: 2)

2021 ◽  
Vol 11 (24) ◽  
pp. 11641
Author(s):  
Beomju Shin ◽  
Jung-Ho Lee ◽  
Changsu Yu ◽  
Hankyeol Kyung ◽  
Taikjin Lee

Long tunnels have recently become more prevalent in Korea, and exits are being added at certain sections of these tunnels. A navigation system should therefore correctly guide the user toward the exit; however, adequate guidance cannot be delivered because the global navigation satellite system (GNSS) signal is not received inside a tunnel. We therefore present an accurate position estimation system that uses the magnetic field for vehicles passing through a tunnel. The position can be accurately estimated using a smartphone's magnetic sensor, given appropriate attitude estimation and magnetic sensor calibration. Position estimation was realized by attaching the smartphone to the dashboard during navigation and calibrating the sensors using position information from the GNSS and the magnetic field database before entering the tunnel. This study used magnetic field sequence data to estimate vehicle positions inside a tunnel. Furthermore, subsequence dynamic time warping was applied to compare the magnetic field data stored in the buffer with the magnetic field database, and the feasibility and performance of the proposed system were evaluated through an experiment in an actual tunnel. The analysis of the position estimation results confirmed that the proposed system can deliver appropriate tunnel navigation.
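The core matching step described above — aligning a short buffer of magnetic readings against any contiguous part of a longer reference database — can be sketched with subsequence dynamic time warping. This is a minimal scalar-valued illustration (real magnetic signatures are multi-axis and noisy), not the authors' implementation; all data values are made up.

```python
def subsequence_dtw(query, sequence):
    """Subsequence DTW: align a short query against any contiguous
    part of a longer sequence; return the best end index and its cost."""
    n, m = len(query), len(sequence)
    INF = float("inf")
    # D[i][j] = cost of aligning query[:i] with a subsequence of
    # `sequence` that ends at position j (1-based).
    D = [[INF] * (m + 1) for _ in range(n + 1)]
    # Free start: the query may begin anywhere in the sequence.
    for j in range(m + 1):
        D[0][j] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(query[i - 1] - sequence[j - 1])
            D[i][j] = cost + min(D[i - 1][j],      # skip a query step
                                 D[i][j - 1],      # skip a sequence step
                                 D[i - 1][j - 1])  # match both
    # Free end: the best alignment may end at any database position.
    end = min(range(1, m + 1), key=lambda j: D[n][j])
    return end - 1, D[n][end]

# Hypothetical magnetic-magnitude database and a buffer recorded mid-tunnel:
database = [0.1, 0.5, 1.2, 3.0, 4.1, 2.2, 0.9, 0.3]
buffer = [1.2, 3.0, 4.1]
position, cost = subsequence_dtw(buffer, database)  # position: index 4
```

The index of the best-matching end point maps back to a position along the tunnel, which is how the buffer comparison against the magnetic field database yields a vehicle position estimate.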


2021 ◽  
pp. bjophthalmol-2021-319509
Author(s):  
Hyungjun Kim ◽  
Hae Min Park ◽  
Hyo Chan Jeong ◽  
So Yeon Moon ◽  
Hyunsoo Cho ◽  
...  

Background/aims: This study aimed to establish a wide-field optical coherence tomography (OCT) deviation map obtained from swept-source OCT (SS-OCT) scans, and to compare the diagnostic ability of this wide-field deviation map with that of the peripapillary and macular deviation maps currently used for the detection of early glaucoma (EG). Methods: Four hundred eyes, including 200 healthy eyes and 200 eyes with EG, were enrolled in this retrospective observational study. Patients underwent a comprehensive ocular examination, including wide-field SS-OCT (DRI-OCT Triton; Topcon, Tokyo, Japan). Each wide-field scan was converted into a uniform template using the fovea and optic disc centres as fixed landmarks. The wide-field deviation map was then obtained by comparing the individual wide-field data against a normative wide-field database that had been created in a previous study by combining images of healthy eyes into a uniform template. The ability of the new wide-field deviation map to distinguish between EG and healthy eyes was assessed against conventional deviation maps using the area under the receiver operating characteristic curve (AUC). Results: Among the various deviation maps, the wide-field deviation map obtained using the normative wide-field database showed the highest diagnostic ability for EG (AUC=0.980 and 0.961 for colour-coded pixels presenting <5% and <1%, respectively). Its AUC was significantly superior to that of most conventional deviation maps (p<0.05). The wide-field deviation map clearly demonstrated early structural glaucomatous damage over a wider area. Conclusion: The wide-field SS-OCT deviation map exhibited good performance in distinguishing eyes with EG from healthy eyes. The visualisation of the wider damaged area on the wide-field deviation map could be useful for the diagnosis of EG in clinical settings.


Author(s):  
M. Annad ◽  
A. Lefkir ◽  
M. Mammar-kouadri ◽  
I. Bettahar

Abstract Several studies have been conducted to assess local scour formulas and to select the most appropriate one. Confronted with the limits of the existing formulas, further studies have proposed new local scour formulas. Generalizing a single scour formula across all soil classes seems too crude for such a complex phenomenon, which depends on several parameters, and may lead to considerable uncertainties in scour estimation. This study proposes several new scour formulas for different granulometric classes of the streambed by exploiting a large field database. The new scour formulas are based on multiple non-linear regression (MNLR) models. Supervised learning is used as an optimization tool to fit the hyper-parameters of each new equation using the 'Gradient Descent Algorithm'. The results show that the new formulas proposed in this study perform better than the empirical formulas chosen for comparison. The results are presented as seven new formulas, together with design charts for the calculation of local scour by soil class.
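Fitting the parameters of a non-linear regression formula by gradient descent, as the paper describes, can be sketched for the common power-law form ds = a·x^b. This is a minimal illustration with made-up synthetic data, not the authors' seven formulas; it fits in log space, where the least-squares objective becomes convex and plain gradient descent converges reliably.

```python
import math

def fit_power_law(x, t, lr=0.1, iters=3000):
    """Fit t ~ a * x**b by gradient descent on the log-transformed
    (and therefore convex) least-squares objective."""
    u = [math.log(xi) for xi in x]   # log of the predictor
    v = [math.log(ti) for ti in t]   # log of the target (scour depth)
    A, b = 0.0, 0.0                  # A = log(a)
    n = len(x)
    for _ in range(iters):
        res = [A + b * ui - vi for ui, vi in zip(u, v)]
        dA = 2.0 * sum(res) / n                          # d(loss)/dA
        db = 2.0 * sum(r * ui for r, ui in zip(res, u)) / n  # d(loss)/db
        A -= lr * dA
        b -= lr * db
    return math.exp(A), b

# Synthetic data generated from ds = 2.0 * x**0.5 (hypothetical values):
x = [0.5, 1.0, 1.5, 2.0, 2.5, 3.0]
t = [2.0 * xi ** 0.5 for xi in x]
a, b = fit_power_law(x, t)  # recovers a ~ 2.0, b ~ 0.5
```

A per-soil-class fit, as in the paper, would simply run this optimization separately on the subset of the field database belonging to each granulometric class.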


2020 ◽  
Vol 1 (first) ◽  
pp. 214-224

This book introduces the reader to Ronald Inglehart's Evolutionary Modernization Theory, from which a set of hypotheses is derived that the theory's author tests using a unique field database collected through the World Values Survey and the European Values Survey between 1981 and 2014. The book is an extension of the socio-political and economic-developmental thought that emerged after the Second World War, embodied in theories of modernization and cultural change; the author adopts a global view of the world's cultural map and emphasizes the primacy of the cultural variable in particular. It is designed to help the reader understand how people's values and goals change, and how this, in turn, changes the world.


2020 ◽  
Vol 157 ◽  
pp. 103630 ◽  
Author(s):  
I. Karmpadakis ◽  
C. Swan ◽  
M. Christou

To keep pace with a digitalizing world, the need for and importance of education is clear to everyone. Growing awareness of digitization has greatly increased the size of education databases. Such databases contain information about students: their behaviour, family background, available facilities, the social environment that surrounds them, their academic records, and so on. Advances in data science make it possible to use this large education database productively by applying data mining to it. When data-mining techniques are applied to educational records, the process is called educational data mining. It helps identify the areas and the students that require attention and intervention, which raises the quality of the education system and positively affects students' success rates and academic understanding. In this paper, four different classification algorithms are used to predict students' grades from their previous academic records. Of the four algorithms, the one that gives the most accurate prediction is taken as the final predictor, and the algorithms are compared by their accuracy percentages.
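The selection procedure described above — train several classifiers on student records, score each on held-out data, and keep the most accurate — can be sketched as follows. This is an illustrative toy with two stand-in classifiers (a majority-class baseline and 1-nearest-neighbour), not the paper's four algorithms, and all student records are invented.

```python
def accuracy(pred, truth):
    """Fraction of predictions matching the true grades."""
    return sum(p == t for p, t in zip(pred, truth)) / len(truth)

def majority_classifier(train_y):
    """Baseline: always predict the most common grade in the training set."""
    label = max(set(train_y), key=train_y.count)
    return lambda x: label

def nn_classifier(train_X, train_y):
    """1-nearest-neighbour over (prior average %, attendance %) records."""
    def predict(x):
        dists = [sum((a - b) ** 2 for a, b in zip(x, row)) for row in train_X]
        return train_y[dists.index(min(dists))]
    return predict

# Hypothetical student records: (prior average %, attendance %) -> grade
train_X = [(85, 95), (40, 60), (78, 88), (35, 50), (90, 92), (55, 70)]
train_y = ["A", "C", "A", "C", "A", "B"]
test_X = [(82, 90), (38, 55)]
test_y = ["A", "C"]

models = {
    "majority": majority_classifier(train_y),
    "1-NN": nn_classifier(train_X, train_y),
}
scores = {name: accuracy([m(x) for x in test_X], test_y)
          for name, m in models.items()}
best = max(scores, key=scores.get)  # the classifier kept as final predictor
```

In practice each stand-in would be replaced by a full learning algorithm, but the compare-by-held-out-accuracy loop is the same.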


Database ◽  
2020 ◽  
Vol 2020 ◽  
Author(s):  
N Palopoli ◽  
J A Iserte ◽  
L B Chemes ◽  
C Marino-Buslje ◽  
G Parisi ◽  
...  

Abstract Modern biology produces data at a staggering rate. Yet much of this biological data remains isolated in the text, figures, tables and supplementary materials of articles. As a result, biological information created at great expense is significantly underutilised. The protein motif biology field does not have sufficient resources to curate the corpus of motif-related literature and, to date, only a fraction of the available articles have been curated. In this study, we develop a set of tools and a web resource, 'articles.ELM', to rapidly identify the motif literature articles pertinent to a researcher's interest. At the core of the resource is a manually curated set of about 8000 motif-related articles. These articles are automatically annotated with a range of relevant biological data, allowing in-depth search functionality. Machine-learning article classification is used to group articles based on their similarity to manually curated motif classes in the Eukaryotic Linear Motif resource. Articles can also be manually classified within the resource. The 'articles.ELM' resource permits the rapid and accurate discovery of relevant motif articles, thereby improving the visibility of motif literature and simplifying the recovery of valuable biological insights sequestered within scientific articles. Consequently, this web resource removes a critical bottleneck in scientific productivity for the motif biology field. Database URL: http://slim.icr.ac.uk/articles/
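Grouping articles by textual similarity to curated classes, as the abstract describes, can be illustrated with a bag-of-words cosine-similarity classifier. This is a minimal sketch of the general technique, not the resource's actual pipeline; the class names and descriptions below are invented stand-ins, not real ELM classes.

```python
from collections import Counter
import math

def cosine(a, b):
    """Cosine similarity between two bag-of-words term-frequency vectors."""
    common = set(a) & set(b)
    dot = sum(a[w] * b[w] for w in common)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def classify(abstract, curated):
    """Assign an article to the class with the most similar curated text."""
    vec = Counter(abstract.lower().split())
    scores = {cls: cosine(vec, Counter(text.lower().split()))
              for cls, text in curated.items()}
    return max(scores, key=scores.get)

# Hypothetical curated class descriptions (illustrative only):
curated = {
    "LIG_SH3": "proline rich sh3 binding motif peptide ligand",
    "MOD_PHOS": "kinase phosphorylation site serine threonine modification",
}
```

A production system would use richer features (TF-IDF, annotations, trained models), but the similarity-to-curated-classes idea is the same.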


Various notions of provenance for database queries have been proposed and studied in the past few years. In this article, we detail three main notions of database provenance, describe some of their applications, and compare them. Specifically, we review why-, how-, and where-provenance, describe the relationships among these notions of provenance, and outline some of their applications in confidence computation, view maintenance and update, debugging, and annotation propagation. This paper also surveys the published literature devoted to storing, tracking, and querying provenance in linked data; it may be used as a guide for finding further articles in the field relatively quickly. It is intended for developers and researchers who would like to familiarise themselves with the foundations of, as well as the many challenges in, the field of database provenance. In particular, the ability to store, track, and query provenance data is becoming a crucial feature of modern triple stores. We present techniques extending a native RDF store to efficiently handle the storage, tracking, and querying of provenance in RDF data. We describe a concrete and understandable specification of the manner in which results were derived from the data and how particular pieces of information were combined to answer a query. Finally, we present mechanisms for tailoring queries with provenance data.
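The why-provenance idea mentioned above — recording which input tuples witness each query result — can be sketched for a simple natural join. This is an illustrative toy with invented tuple IDs and data, not an RDF-store implementation.

```python
# Each input tuple carries a provenance ID; a join result records the set
# of contributing input IDs -- the "witness" in why-provenance.
employees = [("t1", ("alice", "sales")), ("t2", ("bob", "eng"))]
offices = [("t3", ("sales", "london")), ("t4", ("eng", "berlin"))]

def join_with_why(left, right):
    """Natural join on the shared department attribute, propagating
    why-provenance as a set of witnessing input-tuple IDs."""
    out = []
    for lid, (name, dept) in left:
        for rid, (dept2, city) in right:
            if dept == dept2:
                out.append(((name, city), {lid, rid}))
    return out

results = join_with_why(employees, offices)
# e.g. ("alice", "london") is witnessed by input tuples {"t1", "t3"}
```

How-provenance would additionally record the algebraic structure of the derivation (e.g. as a provenance semiring polynomial), and where-provenance would track the source location of each output value; the propagation machinery is analogous.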

