Traditional vs. Machine-Learning Techniques for OSM Quality Assessment

2019 ◽  
pp. 469-487
Author(s):  
Musfira Jilani ◽  
Michela Bertolotto ◽  
Padraig Corcoran ◽  
Amerah Alghanim

Nowadays an ever-increasing number of applications require complete and up-to-date spatial data, in particular maps. However, mapping is an expensive process, and the vastness and dynamics of our world usually render centralized and authoritative maps outdated and incomplete. In this context, crowd-sourced maps have the potential to provide a complete, up-to-date, and free representation of our world. However, the adoption of such maps largely remains limited due to concerns about their data quality. While most current data quality assessment mechanisms for such maps require referencing to authoritative maps, we argue that such referencing of a crowd-sourced spatial database is ineffective. Instead, we focus on the use of machine learning techniques that, we believe, have the potential not only to allow the assessment but also to recommend improvements to the quality of crowd-sourced maps without referencing external databases. This chapter gives an overview of these approaches.


Author(s):  
Feidu Akmel ◽  
Ermiyas Birihanu ◽  
Bahir Siraj

Software systems are software products or applications that support business domains such as manufacturing, aviation, healthcare, insurance, and so on. Software quality is a means of measuring how software is designed and how well it conforms to that design. Among the variables examined for software quality are correctness, product quality, scalability, completeness, and the absence of bugs. However, because the quality standards used by one organization differ from those of another, it is better to apply software metrics to measure the quality of software. Attributes gathered from source code through software metrics can serve as input to a software defect predictor. Software defects are errors introduced by software developers and stakeholders. Finally, in this study we survey the application of machine learning to software defect data gathered from previous research works.
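
To make the pipeline concrete, here is a minimal sketch of metric-based defect prediction as described above: a classifier trained on static source-code metrics (lines of code, cyclomatic complexity, coupling) to flag defect-prone modules. The metric names, the synthetic data, and the labelling rule are illustrative assumptions, not artifacts of the study.

```python
# Hedged sketch: defect prediction from static code metrics.
# The three metrics and the toy labelling rule are assumptions.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

rng = np.random.default_rng(0)
n = 500
X = np.column_stack([
    rng.integers(10, 2000, n),   # lines of code (assumed metric)
    rng.integers(1, 50, n),      # cyclomatic complexity (assumed metric)
    rng.integers(0, 20, n),      # coupling between objects (assumed metric)
])
# Toy labelling rule: larger, more complex modules are more defect-prone.
y = ((X[:, 0] > 800) & (X[:, 1] > 20)).astype(int)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0, stratify=y)
clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X_train, y_train)
print(classification_report(y_test, clf.predict(X_test)))
```

In practice the feature matrix would come from a metrics extraction tool run over real modules, with labels taken from the defect history of the issue tracker.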


Work ◽  
2021 ◽  
pp. 1-12
Author(s):  
Zhang Mengqi ◽  
Wang Xi ◽  
V.E. Sathishkumar ◽  
V. Sivakumar

BACKGROUND: Nowadays, smart cities are growing steadily; they bring together many information and communication technologies that are used to maximize the quality of services. Even though the smart city concept provides many valuable services, security management is still one of the major issues due to shared threats and activities. To overcome these problems, the security factors of smart cities should be analyzed continuously in order to eliminate unwanted activities and enhance the quality of services. OBJECTIVES: To address this problem, active machine learning techniques are used to predict the quality of services in the smart city and to manage security-related issues. In this work, a deep reinforcement learning (DRL) concept is used to learn the features of smart cities; the learning process captures the entire range of activities in the city. City information is gathered with the help of security robots called Cobalt robots, and newly incoming features are examined using a modular deep neural network (MDNN). RESULTS: The system successfully predicts unwanted activity in smart cities by dividing the collected data into smaller subsets, which reduces complexity and improves the overall security management process. The efficiency of the system is evaluated using experimental analysis. CONCLUSION: This exploratory study was conducted with 200 obstacles placed in the smart city, and the introduced DRL-with-MDNN approach achieved the best results for security maintenance.
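
The "divide the collected data into smaller subsets" idea can be illustrated with a hedged sketch of a modular classifier: incoming records are clustered, and one small expert network is trained per cluster. The clustering step, the features, and the labels below are assumptions for demonstration; the abstract does not specify the authors' implementation.

```python
# Hedged sketch of a modular classifier: route each record to a
# cluster-specific expert network. Features and labels are synthetic.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(1)
X = rng.normal(size=(600, 8))             # toy smart-city sensor readings
y = (X[:, 0] + X[:, 3] > 0).astype(int)   # toy "unwanted activity" label

# Step 1: partition the data into smaller subsets.
router = KMeans(n_clusters=3, n_init=10, random_state=1).fit(X)

# Step 2: train one small expert module per subset.
experts = {}
for c in range(3):
    mask = router.labels_ == c
    experts[c] = MLPClassifier(hidden_layer_sizes=(16,), max_iter=500,
                               random_state=1).fit(X[mask], y[mask])

def predict(x):
    """Route a new observation to its cluster's expert module."""
    c = int(router.predict(x.reshape(1, -1))[0])
    return int(experts[c].predict(x.reshape(1, -1))[0])

print(predict(X[0]))
```

Splitting the data this way keeps each expert small, which is the complexity reduction the abstract attributes to the modular design.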


2021 ◽  
Vol 10 (7) ◽  
pp. 436
Author(s):  
Amerah Alghanim ◽  
Musfira Jilani ◽  
Michela Bertolotto ◽  
Gavin McArdle

Volunteered Geographic Information (VGI) is often collected by non-expert users. This raises concerns about the quality and veracity of such data. There has been much effort to understand and quantify the quality of VGI. Extrinsic measures, which compare VGI to authoritative data sources such as National Mapping Agencies, are common, but the cost and slow update frequency of such data hinder the task. On the other hand, intrinsic measures, which compare the data to heuristics or models built from the VGI data itself, are becoming increasingly popular. Supervised machine learning techniques are particularly suitable for intrinsic measures of quality, as they can infer and predict the properties of spatial data. In this article we are interested in assessing the quality of semantic information, such as the road type, associated with data in OpenStreetMap (OSM). We have developed a machine learning approach which utilises new intrinsic input features collected from the VGI dataset. Using our proposed approach we obtained an average classification accuracy of 84.12%, outperforming existing techniques on the same semantic inference task. The trustworthiness of the data used for developing and training machine learning models is also important. To address this issue we have developed a new trustworthiness measure based on direct and indirect characteristics of OSM data, such as its edit history, along with an assessment of the users who contributed the data. An evaluation of the impact of data determined to be trustworthy shows that trusted data collected with the new approach improves the prediction accuracy of our machine learning technique: the classification accuracy of our model is 87.75% when applied to a trusted dataset and 57.98% when applied to an untrusted dataset. Consequently, such results can be used to assess the quality of OSM and to suggest improvements to the dataset.
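
A hedged sketch of the two ideas above, under assumed feature names and an assumed trust formula: (1) predicting a road segment's semantic class from intrinsic features, and (2) filtering the training data with a trustworthiness score derived from edit history and contributor experience. None of the specifics below are taken from the paper.

```python
# Hedged sketch: intrinsic road-type classification plus a hypothetical
# trust score used to select training data. All names and data are assumed.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(2)
n = 400
# Intrinsic features of a road segment (toy values):
# length (m), number of connected segments, sinuosity.
X = np.column_stack([rng.uniform(50, 5000, n),
                     rng.integers(1, 8, n),
                     rng.uniform(1.0, 2.5, n)])
y = (X[:, 0] > 1500).astype(int)   # toy label: 1 = "major road"

def trust_score(n_versions, n_distinct_editors, editor_experience):
    """Hypothetical trust measure: more independent confirmations and
    more experienced contributors raise the score into [0, 1]."""
    confirmations = 1 - 1 / (1 + n_versions * n_distinct_editors)
    return 0.5 * confirmations + 0.5 * min(editor_experience / 100, 1.0)

trust = np.array([trust_score(rng.integers(1, 10),
                              rng.integers(1, 5),
                              rng.integers(1, 200)) for _ in range(n)])

# Train only on records deemed trustworthy (threshold is an assumption).
trusted = trust > 0.6
clf = RandomForestClassifier(n_estimators=100, random_state=2)
clf.fit(X[trusted], y[trusted])
print("trained on", trusted.sum(), "trusted segments")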


2018 ◽  
Vol 27 (03) ◽  
pp. 1850011
Author(s):  
Athanasios Tagaris ◽  
Dimitrios Kollias ◽  
Andreas Stafylopatis ◽  
Georgios Tagaris ◽  
Stefanos Kollias

Neurodegenerative disorders, such as Alzheimer’s and Parkinson’s, constitute a major factor in long-term disability and are becoming an increasingly serious concern in developed countries. As there are at present no effective therapies, early diagnosis, along with the avoidance of misdiagnosis, seems critical for ensuring a good quality of life for patients. In this sense, the adoption of computer-aided diagnosis tools can offer significant assistance to clinicians. In the present paper, we first provide a comprehensive recording of the medical examinations relevant to these disorders. Then, a review is conducted of the use of Machine Learning techniques in supporting the diagnosis of neurodegenerative diseases, with reference to the medical datasets used; special attention is given to the field of Deep Learning. In addition, we announce the launch of a newly created dataset for Parkinson’s disease, containing epidemiological, clinical, and imaging data, which will be made publicly available to researchers for benchmarking purposes. To assess the potential of the new dataset, an experimental study of Parkinson’s diagnosis is carried out, based on state-of-the-art Deep Neural Network architectures, yielding very promising accuracy results.
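
As an illustration of the kind of experiment described, here is a minimal sketch of a feed-forward deep network for binary Parkinson's/control classification on tabular features. The architecture, hyperparameters, and synthetic data are assumptions, not the paper's configuration.

```python
# Hedged sketch: feed-forward network on tabular clinical features.
import torch
import torch.nn as nn

torch.manual_seed(0)
X = torch.randn(200, 20)                 # toy clinical/epidemiological features
y = (X[:, 0] + X[:, 5] > 0).float()      # toy diagnosis labels

model = nn.Sequential(
    nn.Linear(20, 64), nn.ReLU(),
    nn.Linear(64, 32), nn.ReLU(),
    nn.Linear(32, 1),                    # logit for "Parkinson's"
)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.BCEWithLogitsLoss()

for epoch in range(200):
    opt.zero_grad()
    loss = loss_fn(model(X).squeeze(1), y)
    loss.backward()
    opt.step()

acc = ((model(X).squeeze(1) > 0).float() == y).float().mean()
print(f"training accuracy: {acc.item():.2f}")
```

A real benchmarking run would of course use held-out test data and the released dataset rather than training accuracy on synthetic features.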


Author(s):  
Syed Mustafa Ali ◽  
Farah Naureen ◽  
Arif Noor ◽  
Maged Kamel N. Boulos ◽  
Javariya Aamir ◽  
...  

Background: Increasingly, healthcare organizations are using technology for the efficient management of data. The aim of this study was to compare the data quality of digital records with that of the corresponding paper-based records using a data quality assessment framework. Methodology: We conducted a desk review of paper-based and digital records at six enrolled TB clinics over the study period from April 2016 to July 2016. We entered all data fields of the patient treatment (TB01) card into a spreadsheet-based template to undertake a field-to-field comparison of the fields shared between the TB01 card and the digital data. Findings: A total of 117 TB01 cards were prepared at the six enrolled sites, of which only 50% (n=59) were digitized. There were 1,239 comparable data fields, of which 65% (n=803) matched correctly between paper-based and digital records; the remaining 35% (n=436) had anomalies, either in the paper-based records or in the digital records. On average there were 1.9 data quality issues per digital patient record, compared with 2.1 issues per paper-based record. Based on the analysis of valid data quality issues, more issues were found in paper-based records (n=123) than in digital records (n=110). Conclusion: There were fewer data quality issues in digital records than in the corresponding paper-based records. Greater use of mobile data capture and continued use of the data quality assessment framework can deliver more meaningful information for decision making.
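
The field-to-field comparison can be sketched as follows, assuming each record pair is aligned on a shared patient identifier; the column names and records are hypothetical.

```python
# Hedged sketch: count mismatching fields between paper and digital records.
import pandas as pd

paper = pd.DataFrame({
    "patient_id": [1, 2, 3],
    "sex": ["M", "F", "F"],
    "age": [34, 51, 29],
    "regimen": ["Cat-I", "Cat-II", "Cat-I"],
}).set_index("patient_id")

digital = pd.DataFrame({
    "patient_id": [1, 2, 3],
    "sex": ["M", "F", "M"],        # mismatch for patient 3
    "age": [34, 15, 29],           # mismatch for patient 2
    "regimen": ["Cat-I", "Cat-II", "Cat-I"],
}).set_index("patient_id")

# Compare only the fields shared between the two sources.
shared = paper.columns.intersection(digital.columns)
mismatch = paper[shared].ne(digital[shared])

print("comparable fields:", mismatch.size)
print("matched fields:", int((~mismatch).sum().sum()))
print("issues per record:\n", mismatch.sum(axis=1))
```

Dividing the total mismatch count by the number of records gives per-record issue rates of the kind reported in the findings.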


2021 ◽  
Vol 15 ◽  
Author(s):  
Jesús Leonardo López-Hernández ◽  
Israel González-Carrasco ◽  
José Luis López-Cuadrado ◽  
Belén Ruiz-Mezcua

Nowadays, the recognition of emotions in people with sensory disabilities still represents a challenge due to the difficulty of generalizing and modeling the set of brain signals. In recent years, the technology used to study a person’s behavior and emotions based on brain signals is the brain–computer interface (BCI). Although previous works have proposed the classification of emotions in people with sensory disabilities using machine learning techniques, a model for the recognition of emotions in people with visual disabilities has not yet been evaluated. Consequently, in this work the authors present a twofold framework focused on people with visual disabilities. Firstly, auditory stimuli are used, and a component for the acquisition and extraction of brain signals is defined. Secondly, analysis techniques for the modeling of emotions are developed, and machine learning models for the classification of emotions are defined. Based on the results, the algorithm with the best performance in the validation is random forest (RF), with accuracies of 85% and 88% for the classification of negative and positive emotions, respectively. According to the results, the framework is able to classify positive and negative emotions, but the experimentation also shows that the framework’s performance depends on the number of features in the dataset, and that the quality of the electroencephalogram (EEG) signals is a determining factor.
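
A hedged sketch of the best-performing step reported above: a random forest classifying positive versus negative emotion from per-channel EEG band-power features. The channel count, band count, and synthetic data are assumptions for illustration.

```python
# Hedged sketch: random forest on EEG band-power features.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(3)
n_trials, n_channels, n_bands = 240, 14, 5    # assumed, e.g. delta..gamma
X = rng.normal(size=(n_trials, n_channels * n_bands))
y = (X[:, 0] - X[:, 7] > 0).astype(int)       # toy positive/negative labels

clf = RandomForestClassifier(n_estimators=200, random_state=3)
scores = cross_val_score(clf, X, y, cv=5)
print("cross-validated accuracy: %.2f ± %.2f" % (scores.mean(), scores.std()))
```

In a real pipeline the feature matrix would be band powers extracted from recorded EEG epochs, where signal quality directly limits the achievable accuracy, as the abstract notes.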

