Document-Image Related Visual Sensors and Machine Learning Techniques

Sensors ◽  
2021 ◽  
Vol 21 (17) ◽  
pp. 5849
Author(s):  
Kyandoghere Kyamakya ◽  
Ahmad Haj Mosa ◽  
Fadi Al Machot ◽  
Jean Chamberlain Chedjou

Document imaging/scanning approaches are essential techniques for digitalizing documents in various real-world contexts, e.g., libraries, office communication, management of workflows, and electronic archiving [...]

2020 ◽  
Vol 12 (6) ◽  
pp. 2544
Author(s):  
Alice Consilvio ◽  
José Solís-Hernández ◽  
Noemi Jiménez-Redondo ◽  
Paolo Sanetti ◽  
Federico Papa ◽  
...  

The objective of this study is to show the applicability of machine learning and simulative approaches to the development of decision support systems for railway asset management. These techniques are applied within the generic framework developed and tested within the In2Smart project. The framework is composed of different building blocks that show the complete process from data collection and knowledge extraction to real-world decisions. The application of the framework to two different real-world case studies is described: the first deals with strategic earthworks asset management, while the second considers the tactical and operational planning of track circuits’ maintenance. Although different methodologies are applied and different planning levels are considered, both case studies follow the same general framework, demonstrating the generality of the approach. The potential of combining machine learning techniques with simulative approaches to replicate real processes is shown by evaluating the key performance indicators employed within the considered asset management process. Finally, the validation results are reported, together with the human–machine interfaces developed for output visualization.
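The abstract above describes coupling predictive models with simulation to score asset-management KPIs. As a rough illustration of that coupling, and not of the In2Smart framework itself, the hedged Python sketch below feeds the failure probabilities from a hypothetical degradation classifier into a toy Monte Carlo maintenance simulation and reports an assumed downtime KPI; all feature names, thresholds and figures are invented.

```python
# Illustrative sketch only: a degradation classifier feeding a simple maintenance
# simulation that evaluates a KPI (expected downtime). Features, thresholds and
# cost figures are hypothetical, not taken from the In2Smart project.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Hypothetical historical data: [age_years, load_index, last_inspection_score]
X = rng.random((500, 3))
y = (X[:, 0] * 0.6 + X[:, 1] * 0.4 + rng.normal(0, 0.1, 500) > 0.6).astype(int)  # 1 = degraded

model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

def simulate_downtime(failure_prob, horizon_days=365, repair_days=3):
    """Monte Carlo replication of a toy failure/repair process."""
    downtime = 0
    for p in failure_prob:
        failures = rng.binomial(horizon_days, p / horizon_days)
        downtime += failures * repair_days
    return downtime / len(failure_prob)  # KPI: mean downtime days per asset

fleet = rng.random((50, 3))                      # assets awaiting a maintenance decision
p_fail = model.predict_proba(fleet)[:, 1]        # predicted degradation probabilities
print("KPI - expected downtime per asset (days):", simulate_downtime(p_fail))
```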


Author(s):  
Yining Xu ◽  
Xinran Cui ◽  
Yadong Wang

Tumor metastasis is the major cause of mortality from cancer. From this perspective, detecting cancer gene expression and transcriptome changes is important for exploring the molecular mechanisms and cellular events of tumor metastasis. Precisely estimating a patient’s cancer state and prognosis is the key challenge in developing a therapeutic schedule. In recent years, data mining and machine learning techniques have contributed widely to the analysis of real-world gene expression data and the prediction of tumor outcomes, supplying computational models to support decision-making on real-world data. Nevertheless, limitations of real-world data severely restrict model predictive performance, and the complexity of the data makes it difficult to extract vital features. Moreover, the efficacy of standard machine learning pipelines remains far from satisfactory even when diverse feature selection strategies are applied. To address these problems, we developed a directed relation-graph convolutional network to provide an advanced feature extraction strategy. We first constructed a gene regulation network and extracted gene expression features with a relational graph convolutional network. The high-dimensional features of each sample were treated as image pixels, and a convolutional neural network was used to predict the risk of metastasis for each patient. Ten cross-validations on 1,779 cases from The Cancer Genome Atlas show that our model’s performance (area under the curve, AUC = 0.837; area under the precision-recall curve, AUPRC = 0.717) surpasses that of an existing network-based method (AUC = 0.707, AUPRC = 0.555).
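The two-stage pipeline described above (relational graph convolutional feature extraction over a gene regulation network, followed by a CNN over the per-sample feature map) can be sketched in broad strokes. The following is a minimal, hypothetical PyTorch/PyTorch Geometric sketch of that idea, not the authors' model: the graph size, relation count, hidden width and the 32×32 grid shape are all assumptions.

```python
# Minimal sketch of the two-stage idea described above (not the authors' code):
# a relational GCN extracts per-gene features from a gene regulation network,
# the per-sample feature map is reshaped into a 2D grid, and a small CNN scores
# metastasis risk. Graph size, channel counts and the grid shape are assumptions.
import torch
import torch.nn as nn
from torch_geometric.nn import RGCNConv

NUM_GENES, NUM_RELATIONS, HIDDEN = 1024, 3, 4   # hypothetical sizes (32*32 grid)

class MetastasisRiskNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.rgcn = RGCNConv(1, HIDDEN, num_relations=NUM_RELATIONS)
        self.cnn = nn.Sequential(
            nn.Conv2d(HIDDEN, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(4), nn.Flatten(),
            nn.Linear(16 * 4 * 4, 1),
        )

    def forward(self, expr, edge_index, edge_type):
        # expr: (NUM_GENES,) expression values for one patient, used as node features
        h = self.rgcn(expr.unsqueeze(1), edge_index, edge_type)   # (NUM_GENES, HIDDEN)
        grid = h.t().reshape(1, HIDDEN, 32, 32)                   # treat genes as pixels
        return torch.sigmoid(self.cnn(grid))                      # metastasis risk in (0, 1)

# Toy usage on a random regulation graph
edge_index = torch.randint(0, NUM_GENES, (2, 5000))
edge_type = torch.randint(0, NUM_RELATIONS, (5000,))
expr = torch.rand(NUM_GENES)
print(MetastasisRiskNet()(expr, edge_index, edge_type))
```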


2021 ◽  
Author(s):  
Andrew M V Dadario ◽  
Christian Espinoza ◽  
Wellington Araujo Nogueira

Objective Anticipating fetal risk is a major factor in reducing child and maternal mortality and suffering. In this context, cardiotocography (CTG) is a low-cost, well-established procedure that has been in use for decades, despite a lack of consensus regarding its impact on outcomes. Machine learning has emerged as an option for the automatic classification of CTG records, as previous studies showed expert-level results, but these often came at the price of reduced generalization potential. With that in mind, the present study sought to improve the statistical rigor of evaluation towards real-world application. Materials and Methods In this study, a dataset of 2126 CTG recordings labeled as normal, suspect or pathological by the consensus of three expert obstetricians was used to create a baseline random forest model. This was followed by a LightGBM model tuned using Gaussian process regression and post-processed using cross-validation ensembling. Performance was assessed using the area under the precision-recall curve (AUPRC) over 100 experiment executions, each using a testing set comprising 30% of the data, stratified by class label. Results The best model was a cross-validation ensemble of LightGBM models that yielded an AUPRC of 95.82%. Conclusions The model is shown to produce consistent expert-level performance at negligible cost. At an estimated USD 0.78 per million predictions, the model can generate value in settings with CTG-qualified personnel, and all the more in their absence.
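A hedged sketch of the evaluation protocol described above follows: a cross-validation ensemble of LightGBM classifiers scored with AUPRC on a 30% stratified hold-out. The Gaussian-process hyperparameter tuning step is omitted, and synthetic data stands in for the 2126-record CTG set, so the numbers it prints are not comparable to the reported 95.82% AUPRC.

```python
# Minimal sketch of the evaluation protocol, not the authors' code: a cross-validation
# ensemble of LightGBM classifiers scored with macro-averaged AUPRC on a 30%
# stratified hold-out. Gaussian-process tuning is omitted; the data are synthetic.
import numpy as np
from lightgbm import LGBMClassifier
from sklearn.datasets import make_classification
from sklearn.metrics import average_precision_score
from sklearn.model_selection import StratifiedKFold, train_test_split

# Placeholder stand-in for the 2126-record CTG set (3 classes: normal/suspect/pathological)
X, y = make_classification(n_samples=2126, n_features=21, n_informative=10,
                           n_classes=3, n_clusters_per_class=1, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.30, stratify=y, random_state=0)

# Cross-validation ensembling: one model per fold, predicted probabilities averaged
models = []
for tr_idx, _ in StratifiedKFold(n_splits=5, shuffle=True, random_state=0).split(X_tr, y_tr):
    models.append(LGBMClassifier(n_estimators=300).fit(X_tr[tr_idx], y_tr[tr_idx]))
proba = np.mean([m.predict_proba(X_te) for m in models], axis=0)

# Macro-averaged AUPRC over the three classes (one-vs-rest)
auprc = np.mean([average_precision_score(y_te == c, proba[:, c]) for c in range(3)])
print(f"ensemble AUPRC: {auprc:.4f}")
```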


2012 ◽  
Vol 2 (1) ◽  
pp. 65-80 ◽  
Author(s):  
Takeshi Okadome ◽  
Hajime Funai ◽  
Sho Ito ◽  
Junya Nakajima ◽  
Koh Kakusho

The method proposed in this paper searches for web pages using an event-related query consisting of a noun, a verb, and a genre term. It re-ranks web pages retrieved by a standard search engine on the basis of scores calculated from an expression of weighted factors, such as the frequency of the query words. For genres characterized by their genre terms, the method optimizes the weights of the expression. Furthermore, the method attempts to improve the scores of relevant pages by using machine learning techniques. Evaluations are also provided to show the effectiveness of the method.
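To make the scoring idea concrete, the hypothetical sketch below re-ranks pages by a weighted sum of a query-term frequency factor and the original engine rank; the factor set and weights are illustrative stand-ins for the paper's expression, whose exact form is not given here.

```python
# Illustrative sketch of a weighted re-ranking score of the kind described above.
# The factor names, weights and pages are hypothetical, not taken from the paper.
from dataclasses import dataclass

@dataclass
class Page:
    url: str
    text: str
    engine_rank: int  # rank from the standard search engine (1 = best)

def score(page: Page, query_terms: list[str], weights: dict[str, float]) -> float:
    words = page.text.lower().split()
    term_freq = sum(words.count(t.lower()) for t in query_terms) / max(len(words), 1)
    rank_prior = 1.0 / page.engine_rank
    return weights["term_freq"] * term_freq + weights["rank_prior"] * rank_prior

pages = [
    Page("http://a.example", "festival opens with fireworks and music festival", 2),
    Page("http://b.example", "city council meeting minutes", 1),
]
weights = {"term_freq": 0.7, "rank_prior": 0.3}   # would be optimized per genre
query = ["festival", "open"]                      # noun + verb from the event query
reranked = sorted(pages, key=lambda p: score(p, query, weights), reverse=True)
print([p.url for p in reranked])
```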


Philosophies ◽  
2019 ◽  
Vol 4 (2) ◽  
pp. 27
Author(s):  
Jean-Louis Dessalles

Deep learning and similar machine learning techniques have a huge advantage over other AI methods: they do function when applied to real-world data, ideally from scratch and without human intervention. However, they have several shortcomings that mere quantitative progress is unlikely to overcome. The paper analyses these shortcomings as resulting from the type of compression achieved by these techniques, which is limited to statistical compression. Two directions for qualitative improvement, inspired by comparison with cognitive processes, are proposed here in the form of two mechanisms: complexity drop and contrast. These mechanisms are supposed to operate dynamically and not through pre-processing as in neural networks. Their introduction may move the functioning of AI away from mere reflex and closer to reflection.


Author(s):  
Kushagra Singh Sisodiya

Abstract: The Industrial Revolution 4.0 has flooded the virtual world with data, including Internet of Things (IoT) data, mobile data, cybersecurity data, business data, social network data, and health data. To analyse these data efficiently and create efficient, streamlined applications, expertise in artificial intelligence, specifically machine learning (ML), is required. This field makes use of a variety of machine learning methods, including supervised, unsupervised, semi-supervised, and reinforcement learning. Additionally, deep learning, a subset of the broader family of machine learning techniques, is capable of effectively analysing vast amounts of data. Machine learning is a broad term that encompasses a number of methods used to extract information from data. These methods may allow the rapid translation of massive real-world information into applications that assist patients and providers in making decisions. The objective of this literature review was to find observational studies that utilised machine learning to enhance patient-provider decision-making using secondary data. Keywords: Machine Learning, Real World, Patient, Population, Artificial Intelligence


Author(s):  
Senka Krivic ◽  
Michael Cashmore ◽  
Daniele Magazzeni ◽  
Bram Ridder ◽  
Sandor Szedmak ◽  
...  

In real-world environments, the state is almost never completely known, and exploration is often expensive. The application of planning in these environments is consequently more difficult and less robust. In this paper we present an approach for predicting new information about a partially-known state. The state is translated into a partially-known multigraph, which can then be extended using machine learning techniques. We demonstrate the effectiveness of our approach, showing that it enhances the scalability of our planners and leads to less time spent on sensing actions.
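As a loose illustration of extending a partially-known multigraph, the sketch below proposes unobserved edges with a simple shared-neighbour heuristic over a toy robot/valve/room graph; the actual learning method and planner integration from the paper are not reproduced, and all node names are made up.

```python
# Illustrative sketch only: proposing likely missing edges in a partially-known
# multigraph with a common-neighbour heuristic. The learning method and planner
# integration used in the paper are not reproduced; node names are invented.
import networkx as nx

g = nx.MultiGraph()
g.add_edges_from([
    ("robot", "room_a", {"relation": "in"}),
    ("robot", "room_b", {"relation": "can_reach"}),
    ("valve_1", "room_a", {"relation": "in"}),
    ("valve_2", "room_b", {"relation": "in"}),
    ("robot", "valve_1", {"relation": "can_reach"}),
])

def predict_missing_edges(graph, threshold=1):
    """Propose unobserved edges between nodes sharing >= threshold neighbours."""
    proposals = []
    nodes = list(graph.nodes)
    for i, u in enumerate(nodes):
        for v in nodes[i + 1:]:
            if graph.has_edge(u, v):
                continue
            shared = len(set(graph.neighbors(u)) & set(graph.neighbors(v)))
            if shared >= threshold:
                proposals.append((u, v, shared))
    return proposals

# e.g. proposes a robot-to-valve_2 edge via the shared room_b neighbour
print(predict_missing_edges(g))
```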


IEEE Expert ◽  
1995 ◽  
Vol 10 (2) ◽  
pp. 37-45 ◽  
Author(s):  
M. Kaiser ◽  
V. Klingspor ◽  
J. del R. Millan ◽  
M. Accame ◽  
F. Wallner ◽  
...  

2019 ◽  
Vol 37 (6) ◽  
pp. 952-969
Author(s):  
Ahsan Mahmood ◽  
Hikmat Ullah Khan

Purpose The purpose of this paper is to apply state-of-the-art machine learning techniques to assess the quality of restaurants using restaurant inspection data. Machine learning techniques are applied to solve real-world problems in all spheres of life. Health and food departments pay regular visits to restaurants for inspection and mark the condition of each restaurant on the basis of the inspection. These inspections consider many factors that determine the condition of the restaurants and make it possible for the authorities to classify them. Design/methodology/approach In this paper, standard machine learning techniques (support vector machines, naïve Bayes and random forest classifiers) are applied to classify the critical level of restaurants on the basis of features identified during inspection. The importance of different inspection factors is determined through feature selection using the minimum-redundancy-maximum-relevance and learning vector quantization feature importance methods. Findings The experiments are conducted on the real-world New York City restaurant inspection data set, which contains diverse inspection features. The results show that the nonlinear support vector machine achieves better accuracy than the other techniques. Moreover, this research study investigates the importance of different factors of restaurant inspection and finds that inspection score and grade are significant features. The performance of the classifiers is measured using the standard evaluation measures of accuracy, sensitivity and specificity. Originality/value This research uses a real-world restaurant inspection data set that has, to the best of the authors’ knowledge, never been used previously by researchers. The findings are helpful in identifying the best restaurants and the factors that are considered important in restaurant inspection. The results are also important for identifying possible biases in restaurant inspections by the authorities.
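A minimal sketch of this kind of classifier comparison is given below, using synthetic stand-in features rather than the New York City inspection data; mutual-information ranking stands in for the mRMR and learning vector quantization importance methods named in the abstract.

```python
# Minimal sketch of the comparison described above, not the authors' pipeline:
# SVM, naive Bayes and random forest on synthetic stand-in inspection features,
# with mutual-information ranking standing in for mRMR/LVQ feature importance.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import mutual_info_classif
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Placeholder for inspection features (score, grade, violation counts, ...)
X, y = make_classification(n_samples=2000, n_features=12, n_informative=6, random_state=0)

models = {
    "nonlinear SVM": make_pipeline(StandardScaler(), SVC(kernel="rbf")),
    "naive Bayes": GaussianNB(),
    "random forest": RandomForestClassifier(n_estimators=200, random_state=0),
}
for name, model in models.items():
    acc = cross_val_score(model, X, y, cv=5, scoring="accuracy").mean()
    print(f"{name}: accuracy = {acc:.3f}")

# Feature relevance ranking (stand-in for mRMR / LVQ importance)
relevance = mutual_info_classif(X, y, random_state=0)
print("most relevant feature indices:", relevance.argsort()[::-1][:3])
```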

