Sensors of Smart Devices in the Internet of Everything (IoE) Era: Big Opportunities and Massive Doubts

2019, Vol 2019, pp. 1-26
Author(s): Mohammad Masoud, Yousef Jaradat, Ahmad Manasrah, Ismael Jannoud

The smart device industry allows developers and designers to embed different sensors, processors, and memories in small-size electronic devices. Sensors are added to enhance the usability of these devices and to improve the quality of experience through data collection and analysis. In the era of big data and machine learning, however, sensor data may be processed with different techniques to infer various kinds of hidden information. The extracted information may benefit device users, developers, and designers by enhancing the management, operation, and development of these devices; it may also be used to compromise the security and privacy of humans in the era of the Internet of Everything (IoE). In this work, we review the process of inferring meaningful information from the sensors of smart devices, especially smartphones. In addition, useful machine learning applications based on smartphone sensor data are presented. Moreover, side-channel attacks that exploit the same sensors and the same machine learning algorithms are surveyed.
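As a rough illustration of the kind of inference pipeline such a review covers (not the authors' own method), the following sketch trains a classifier to guess a user's activity from windowed accelerometer readings; the data, feature set, and activity labels are simulated assumptions.

# Hypothetical sketch: inferring user activity from smartphone accelerometer data.
# The feature set, window length, and labels are illustrative assumptions.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Simulated windowed accelerometer readings: (n_windows, samples_per_window, 3 axes).
X_raw = rng.normal(size=(500, 128, 3))
y = rng.integers(0, 3, size=500)          # e.g. 0 = still, 1 = walking, 2 = typing

def window_features(w):
    """Simple per-axis statistics plus signal magnitude, one feature row per window."""
    mag = np.linalg.norm(w, axis=1)
    return np.concatenate([w.mean(axis=0), w.std(axis=0), [mag.mean(), mag.std()]])

X = np.array([window_features(w) for w in X_raw])
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
print("held-out accuracy:", clf.score(X_te, y_te))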

2021, Vol 3 (1)
Author(s): Zhikuan Zhao, Jack K. Fitzsimons, Patrick Rebentrost, Vedran Dunjko, Joseph F. Fitzsimons

Machine learning has recently emerged as a fruitful area for finding potential quantum computational advantage. Many of the quantum-enhanced machine learning algorithms critically hinge upon the ability to efficiently produce states proportional to high-dimensional data points stored in a quantum-accessible memory. Even given query access to exponentially many entries stored in a database, the construction of which is considered a one-off overhead, it has been argued that the cost of preparing such amplitude-encoded states may offset any exponential quantum advantage. Here we prove using smoothed analysis that if the data analysis algorithm is robust against small entry-wise input perturbations, state preparation can always be achieved with a constant number of queries. This criterion is typically satisfied in realistic machine learning applications, where input data is subject to moderate noise. Our results are equally applicable to the recent seminal progress in quantum-inspired algorithms, where specially constructed databases suffice for polylogarithmic classical algorithms in low-rank cases. The consequence of our finding is that, for the purpose of practical machine learning, polylogarithmic processing time is possible under a general and flexible input model, with quantum algorithms or, in the low-rank case, quantum-inspired classical algorithms.
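As background (a standard definition, not a result of this paper), the amplitude-encoded state that such algorithms must prepare for a data vector x in R^N is

\[
  |x\rangle \;=\; \frac{1}{\lVert x \rVert_2} \sum_{i=1}^{N} x_i \, |i\rangle ,
\]

so an N-dimensional point occupies only \lceil \log_2 N \rceil qubits; the smoothed-analysis result above concerns the query cost of preparing this state from a quantum-accessible memory when the entries x_i are subject to small perturbations.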


2021, Vol 28 (1), pp. e100251
Author(s): Ian Scott, Stacey Carter, Enrico Coiera

Machine learning algorithms are being used to screen for and diagnose disease, prognosticate, and predict therapeutic responses. Hundreds of new algorithms are being developed, but whether they improve clinical decision making and patient outcomes remains uncertain. If clinicians are to use algorithms, they need to be reassured that key issues relating to their validity, utility, feasibility, safety, and ethical use have been addressed. We propose a checklist of 10 questions that clinicians can ask of those advocating for the use of a particular algorithm, without expecting clinicians, as non-experts, to demonstrate mastery over what can be highly complex statistical and computational concepts. The questions are: (1) What is the purpose and context of the algorithm? (2) How good were the data used to train the algorithm? (3) Were there sufficient data to train the algorithm? (4) How well does the algorithm perform? (5) Is the algorithm transferable to new clinical settings? (6) Are the outputs of the algorithm clinically intelligible? (7) How will this algorithm fit into and complement current workflows? (8) Has use of the algorithm been shown to improve patient care and outcomes? (9) Could the algorithm cause patient harm? and (10) Does use of the algorithm raise ethical, legal or social concerns? We provide examples where an algorithm may raise concerns and apply the checklist to a recent review of diagnostic imaging applications. The checklist aims to assist clinicians in assessing algorithm readiness for routine care and in identifying situations where further refinement and evaluation are required prior to large-scale use.


2021, pp. 1-16
Author(s): Kevin Kloos

The use of machine learning algorithms at national statistical institutes has increased significantly over the past few years. Applications range from new imputation schemes to new statistical output based entirely on machine learning. The results are promising, but recent studies have shown that the use of machine learning in official statistics always introduces a bias, known as misclassification bias. Misclassification bias does not occur in traditional applications of machine learning, and it has therefore received little attention in the academic literature. In earlier work, we collected existing methods that are able to correct misclassification bias and compared their statistical properties, including bias, variance, and mean squared error. In this paper, we present a new generic method to correct misclassification bias for time series and derive its statistical properties. Moreover, we show numerically that it has a lower mean squared error than the existing alternatives in a wide variety of settings. We believe that our new method may improve machine learning applications in official statistics, and we hope that our work will stimulate further methodological research in this area.
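For context, a minimal sketch of the classical baseline correction (the confusion-matrix, or "calibration", estimator) is shown below; this is one of the existing correction methods referred to above, not the new time-series method the paper introduces, and the numbers are illustrative.

# Hedged sketch of a baseline misclassification-bias correction, not the paper's method.
import numpy as np

# Suppose a classifier assigns items to K = 2 classes. On a small audited sample we
# estimate P[predicted = i | true = j], giving a column-stochastic confusion matrix.
P = np.array([[0.9, 0.2],
              [0.1, 0.8]])            # illustrative numbers

# Raw machine-learning class counts over the full population (biased by misclassification).
counts_pred = np.array([620.0, 380.0])

# Since E[counts_pred] = P @ counts_true, invert the relation to correct the totals.
counts_true = np.linalg.solve(P, counts_pred)
print(counts_true)                    # bias-corrected class totals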


2021, Vol 22 (Supplement_1)
Author(s): M Omer, A Amir-Khalili, A Sojoudi, T Thao Le, S A Cook, ...

Funding Acknowledgements: Type of funding sources: Public grant(s) – National budget only. Main funding source(s): SmartHeart EPSRC programme grant (www.nihr.ac.uk), London Medical Imaging and AI Centre for Value-Based Healthcare.

Background: Quality measures for machine learning algorithms include clinical measures such as end-diastolic (ED) and end-systolic (ES) volume, volumetric overlaps such as the Dice similarity coefficient, and surface distances such as the Hausdorff distance. These measures capture differences between manually drawn and automated contours but fail to capture a clinician's trust in an automatically generated contour.

Purpose: We propose to capture clinicians' trust directly and systematically. We display manual and automated contours sequentially in random order and ask clinicians to score the contour quality. We then perform statistical analysis for both sources of contours and stratify results by contour type.

Data: The data selected for this experiment came from the National Health Center Singapore. It comprises CMR scans from 313 patients with diverse pathologies, including: healthy, dilated cardiomyopathy (DCM), hypertension (HTN), hypertrophic cardiomyopathy (HCM), ischemic heart disease (IHD), left ventricular non-compaction (LVNC), and myocarditis. Each study contains a short-axis (SAX) stack, with ED and ES phases manually annotated. Automated contours are generated for each SAX image for which manual annotation is available. For this, a machine learning algorithm trained at Circle Cardiovascular Imaging Inc. is applied and the resulting predictions are saved to be displayed in the contour quality scoring (CQS) application.

Methods: The CQS application displays manual and automated contours in random order and asks the user to assign a contour quality score (1: Unacceptable, 2: Bad, 3: Fair, 4: Good). The UK Biobank standard operating procedure is used for assessing the quality of the contoured images. Quality scores are assigned based on how the contour affects clinical outcomes. However, as images are presented independent of spatiotemporal context, contour quality is assessed based on how well the area of the delineated structure is approximated. Consequently, small contours and small deviations are rarely assigned a quality score of less than 2, as they are not clinically relevant. Special attention is given to the RV-endo contours, as often, mostly in basal images, two separate contours appear. In such cases, a score of 3 is given if the two disjoint contours sufficiently encompass the underlying anatomy; otherwise they are scored as 2 or 1.

Results: A total of 50,991 quality scores (24,208 manual and 26,783 automated) were generated by five expert raters. The mean scores for manual and automated contours are 3.77 ± 0.48 and 3.77 ± 0.52, respectively. The breakdown of mean quality scores by contour type is included in Fig. 1a, and the distribution of quality scores across raters is shown in Fig. 1b.

Conclusion: We proposed a method of comparing the quality of manual versus automated contouring methods. The results suggest similar quality-score statistics for both sources of contours.

[Abstract Figure 1: mean quality scores by contour type (a) and quality-score distributions by rater (b); figure not reproduced here.]
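A minimal sketch of how such scores might be aggregated per contour source and contour type is given below; the column names and toy data are hypothetical, not taken from the study.

# Hypothetical sketch: aggregating contour quality scores by source and contour type.
import pandas as pd

scores = pd.DataFrame({
    "source": ["manual", "manual", "automated", "automated"] * 2,
    "contour_type": ["LV-endo", "RV-endo"] * 4,
    "score": [4, 3, 4, 2, 4, 4, 3, 3],
})

# Mean +/- standard deviation per source, then stratified by contour type.
print(scores.groupby("source")["score"].agg(["mean", "std"]))
print(scores.groupby(["source", "contour_type"])["score"].agg(["mean", "std"]))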


2021, pp. 108529
Author(s): Miia Lillstrang, Markus Harju, Guillermo del Campo, Gonzalo Calderon, Juha Röning, ...

2021, Vol 218, pp. 44-51
Author(s): D. Venkata Vara Prasad, Lokeswari Y. Venkataramana, P. Senthil Kumar, G. Prasannamedha, K. Soumya, ...

Author(s): Syed Jamal Safdar Gardezi, Mohamed Meselhy Eltoukhy, Ibrahima Faye

Breast cancer is one of the leading causes of death in women worldwide. Early detection is the key to reducing mortality rates. Mammography screening has proven to be one of the most effective tools for the diagnosis of breast cancer. A computer-aided diagnosis (CAD) system is a fast, reliable, and cost-effective tool that assists radiologists and physicians in diagnosing breast cancer. CAD systems play an increasingly important role in the clinic by providing a second opinion. Clinical trials have shown that CAD systems improve the accuracy of breast cancer detection. A typical CAD system involves three major steps: segmentation of suspected lesions, feature extraction, and classification of these regions into normal or abnormal classes and further into benign or malignant stages. The diagnostic ability of any CAD system depends on accurate segmentation, feature extraction techniques, and, most importantly, classification tools that can discriminate normal tissues from abnormal tissues. In this chapter we discuss the application of machine learning algorithms, e.g. ANN, binary tree, SVM, etc., together with segmentation and feature extraction techniques in the development of a CAD system. Various methods used in the detection and diagnosis of breast lesions in mammography are reviewed. A brief introduction to the machine learning tools used in diagnosis, and their classification performance with various segmentation and feature extraction techniques, is presented.
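A minimal sketch of the three CAD steps named above is given below; the thresholding rule, feature set, and SVM configuration are illustrative assumptions rather than the chapter's specific methods, and the image data are simulated.

# Hedged sketch of a toy CAD pipeline: segmentation, feature extraction, classification.
import numpy as np
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

def segment_lesion(roi, threshold=0.6):
    """Crude segmentation: binary mask of bright pixels in a normalized ROI."""
    return roi > threshold

def extract_features(roi):
    """Simple intensity and shape features from an ROI and its lesion mask."""
    mask = segment_lesion(roi)
    area = mask.sum()
    return np.array([roi.mean(), roi.std(), area, roi[mask].mean() if area else 0.0])

# Simulated mammogram ROIs (64x64 patches) with benign/malignant labels.
rng = np.random.default_rng(1)
rois = rng.random((200, 64, 64))
labels = rng.integers(0, 2, size=200)

X = np.array([extract_features(r) for r in rois])
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf")).fit(X, labels)
print("training accuracy (toy data):", clf.score(X, labels))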


Energies, 2020, Vol 13 (17), pp. 4368
Author(s): Chun-Wei Chen, Chun-Chang Li, Chen-Yu Lin

An energy baseline is an important method for measuring the energy-saving benefits of a chiller system; the benefits can be calculated by comparing prediction models with actual results. Currently, machine learning is often adopted as the prediction model for energy baselines. Common models include regression, ensemble learning, and deep learning models. In this study, we first reviewed several machine learning algorithms that were used to establish prediction models. Then, clustering was adopted to preprocess the chiller data. Data mining, K-means clustering, and the gap statistic were used to successfully identify the critical variables for clustering chiller operating modes. Applying these key variables effectively enhanced the quality of the chiller data, and combining the clustering results with the machine learning model effectively improved the prediction accuracy of the model and the reliability of the energy baselines.
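A minimal sketch of this clustering-then-modelling idea is shown below, assuming illustrative chiller variables and model choices; the number of clusters would, as in the study, be chosen with a method such as the gap statistic.

# Hedged sketch: cluster chiller operating modes, then fit one baseline model per cluster.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(2)
# Illustrative chiller variables: outdoor temperature, chilled-water flow, load ratio.
X = rng.random((1000, 3))
y = 2.0 * X[:, 0] + 1.5 * X[:, 2] + rng.normal(scale=0.1, size=1000)  # toy energy use

# Step 1: cluster operating modes (k = 3 assumed here for illustration).
kmeans = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X)

# Step 2: fit a separate energy-baseline regressor for each operating mode.
models = {}
for c in range(3):
    idx = kmeans.labels_ == c
    models[c] = GradientBoostingRegressor().fit(X[idx], y[idx])

# Predict the baseline for new operating points using the matching cluster's model.
X_new = rng.random((5, 3))
clusters = kmeans.predict(X_new)
y_pred = np.array([models[c].predict(x.reshape(1, -1))[0] for c, x in zip(clusters, X_new)])
print(y_pred)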


Cancers, 2020, Vol 12 (12), pp. 3817
Author(s): Shi-Jer Lou, Ming-Feng Hou, Hong-Tai Chang, Chong-Chi Chiu, Hao-Hsien Lee, ...

No studies have discussed machine learning algorithms to predict recurrence within 10 years after breast cancer surgery. This study aimed to compare the accuracy of forecasting models for predicting recurrence within 10 years after breast cancer surgery and to identify significant predictors of recurrence. Registry data for breast cancer surgery patients were allocated to a training dataset (n = 798) for model development, a testing dataset (n = 171) for internal validation, and a validating dataset (n = 171) for external validation. Global sensitivity analysis was then performed to evaluate the significance of the selected predictors. Demographic characteristics, clinical characteristics, quality of care, and preoperative quality of life were significantly associated with recurrence within 10 years after breast cancer surgery (p < 0.05). Artificial neural networks had the highest prediction performance indices. Additionally, surgeon volume was the best predictor of recurrence within 10 years after breast cancer surgery, followed by hospital volume and tumor stage. Accurate prediction of recurrence within 10 years by machine learning algorithms may improve precision in managing patients after breast cancer surgery and improve understanding of risk factors for 10-year recurrence.
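The sketch below illustrates the general idea with a small neural network on simulated tabular data; permutation importance is used here as a stand-in for the study's global sensitivity analysis, and all feature names, data, and model settings are hypothetical.

# Hedged sketch: neural-network recurrence prediction plus a simple predictor ranking.
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(3)
feature_names = ["surgeon_volume", "hospital_volume", "tumor_stage", "age", "qol_score"]
X = rng.random((1140, len(feature_names)))
# Simulated 10-year recurrence labels driven mostly by the first three predictors.
y = (0.6 * X[:, 0] + 0.3 * X[:, 1] + 0.2 * X[:, 2]
     + rng.normal(scale=0.2, size=1140) > 0.6).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = MLPClassifier(hidden_layer_sizes=(16, 8), max_iter=1000, random_state=0).fit(X_tr, y_tr)

# Rank predictors by how much shuffling each one degrades held-out performance.
result = permutation_importance(clf, X_te, y_te, n_repeats=20, random_state=0)
for name, imp in sorted(zip(feature_names, result.importances_mean), key=lambda t: -t[1]):
    print(f"{name}: {imp:.3f}")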


Author(s): Anitha Elavarasi S., Jayanthi J.

Machine learning enables a system to learn automatically, without human intervention, and to improve its performance with the help of previous experience. It can access data and use it to learn by itself. Even though many algorithms have been developed to solve machine learning problems, it is difficult to handle all kinds of input data in order to arrive at accurate decisions. Domain knowledge of statistics, probability, logic, mathematical optimization, reinforcement learning, and control theory plays a major role in developing machine learning algorithms. Key considerations in selecting a suitable programming language for implementing a machine learning algorithm include performance, concurrency, application development, and the learning curve. This chapter deals with a few of the top programming languages used for developing machine learning applications: Python, R, and Java. The three programming languages most preferred by data scientists are (1) Python, used by more than 57%, (2) R, used by more than 31%, and (3) Java, used by 17% of data scientists.

