Performance Prediction of Listed Companies in Smart Healthcare Industry: Based on Machine Learning Algorithms

2022 ◽  
Vol 2022 ◽  
pp. 1-7
Author(s):  
Baobao Dong ◽  
Xiangming Wang ◽  
Qi Cao

With the development of wireless networks, communication technology, cloud platforms, and the Internet of Things (IoT), new technologies are gradually being applied to the smart healthcare industry. The COVID-19 outbreak has brought more attention to the development of this emerging industry. However, its development is restricted by factors such as long construction cycles, large early-stage investment, and lagging returns, and listed companies in the sector also face financing difficulties. In this study, machine learning algorithms are used to predict performance; they can not only handle large amounts of data and feature variables but also analyse different types of variables and predict their classification, increasing the stability and accuracy of the model and helping to address the historically poor quality of performance prediction. After analysing sample data from 53 listed companies in the smart healthcare industry, we argue that the conclusions of this study can provide a reference for listed companies in the industry when formulating their strategies, offer shareholders strategies for avoiding risk, and support the development of this emerging industry.
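The abstract does not specify the model, so the following is only a rough sketch of the kind of pipeline described (many financial feature variables, a classifier robust to mixed variable types). The synthetic data, the choice of a random forest, and all parameters are assumptions, not the paper's method.

```python
# Illustrative sketch only: synthetic data stand in for the financial
# feature variables of the 53 listed companies; a random forest is one
# plausible classifier for mixed-type tabular features.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

# Stand-in for financial indicators (e.g. profitability, leverage, growth).
X, y = make_classification(n_samples=53, n_features=12, n_informative=6,
                           random_state=0)

clf = RandomForestClassifier(n_estimators=200, random_state=0)
print("cv accuracy:", cross_val_score(clf, X, y, cv=5).mean().round(3))
```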

Since the introduction of machine learning in the field of disease analysis and diagnosis, it has revolutionized the industry by a big margin, and as a result, many frameworks for disease prognostics have been developed. This paper focuses on the analysis of three different machine learning algorithms applied to dementia: a neural network, Naïve Bayes, and an SVM. While the paper focuses on comparing the three algorithms, we also try to identify the important features and causes relevant to dementia prognostication. Dementia is a severe neurological disease that renders a person unable to use memory and logic if not treated at an early stage, so a correct implementation of a fast machine learning algorithm may increase the chances of successful treatment. The analysis of the three algorithms provides a pathway for further research and for building a more complex system for disease prognostication.
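A minimal sketch of the three-way comparison described above, using scikit-learn's standard implementations of the named algorithm families. The dementia data are not public here, so synthetic stand-in data and all hyperparameters are assumptions.

```python
# Compare a neural network, Naive Bayes, and an SVM by cross-validation.
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB
from sklearn.neural_network import MLPClassifier
from sklearn.svm import SVC

# Stand-in for the dementia features and labels (not public here).
X, y = make_classification(n_samples=500, n_features=20, random_state=0)

models = {
    "Neural network": MLPClassifier(hidden_layer_sizes=(32,), max_iter=1000),
    "Naive Bayes": GaussianNB(),
    "SVM": SVC(kernel="rbf"),
}
for name, model in models.items():
    scores = cross_val_score(model, X, y, cv=5)
    print(f"{name}: mean accuracy {scores.mean():.3f}")
```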


2021 ◽  
Author(s):  
Howard Maile ◽  
Ji-Peng Olivia Li ◽  
Daniel Gore ◽  
Marcello Leucci ◽  
Padraig Mulholland ◽  
...  

BACKGROUND: Keratoconus is a disorder characterized by progressive thinning and distortion of the cornea. If detected at an early stage, corneal collagen cross-linking can prevent disease progression and further visual loss. While advanced forms are easily detected, reliably identifying subclinical disease can be problematic. A number of different machine learning algorithms have been used to improve the detection of subclinical keratoconus based on the analysis of single or multiple clinical measures such as corneal imaging, aberrometry, or biomechanical measurements.
OBJECTIVE: To survey and critically evaluate the literature on the algorithmic detection of subclinical keratoconus and equivalent definitions.
METHODS: We performed a structured search of the following databases: Medical Literature Analysis and Retrieval System Online (MEDLINE), Excerpta Medica Database (EMBASE), Web of Science, and Cochrane, from Jan 1, 2010 to Oct 31, 2020. We included all full-text studies that used algorithms for the detection of subclinical keratoconus and excluded studies that did not perform validation.
RESULTS: We compared the parameters measured and the design of the machine learning algorithms reported in the 26 papers that met the inclusion criteria. All salient information required for detailed comparison, including diagnostic criteria, demographic data, sample size, acquisition system, validation details, parameter inputs, machine learning algorithm, and key results, is reported in this study.
CONCLUSIONS: Machine learning has the potential to improve the detection of subclinical or early keratoconus in routine ophthalmic practice. Presently there is no consensus regarding the corneal parameters that should be included for assessment or the optimal design of the machine learning algorithm. We have identified avenues for further research to improve early detection and the stratification of patients for early intervention to prevent disease progression.
CLINICALTRIAL: N/A


Author(s):  
Sanjay Kumar Singh ◽  
Anjali Goyal

Cervical cancer is the second most prevalent cancer in women worldwide, and the Pap smear is one of the most popular techniques used to diagnose cervical cancer at an early stage. Developing countries such as India face the challenge of handling more cases every day. In this article, various online and offline machine learning algorithms are applied to benchmark datasets to detect cervical cancer. The article also addresses the problem of segmentation with hybrid techniques and optimizes the number of features using extra-trees classifiers. Accuracy, precision, recall, and F1 score increase with the proportion of data used for training, reaching up to 100% for some algorithms. Logistic regression with L1 regularization achieves an accuracy of 100%, but it is far more costly in CPU time than some of the algorithms that obtain 99% accuracy in less time. The key finding of this article is the selection of the best machine learning algorithm with the highest accuracy; cost effectiveness in terms of CPU time is also analysed.
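The two steps named above, feature reduction with an extra-trees classifier followed by L1-regularized logistic regression, can be sketched as below. The benchmark data are replaced by a synthetic stand-in, and the selection threshold and parameters are assumptions.

```python
# Extra-trees feature selection feeding an L1-regularized logistic regression.
from sklearn.datasets import make_classification
from sklearn.ensemble import ExtraTreesClassifier
from sklearn.feature_selection import SelectFromModel
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

X, y = make_classification(n_samples=800, n_features=30, random_state=1)

# Keep only features the extra-trees ensemble ranks as important.
selector = SelectFromModel(ExtraTreesClassifier(n_estimators=100, random_state=1))
X_reduced = selector.fit_transform(X, y)

clf = LogisticRegression(penalty="l1", solver="liblinear")
print("cv accuracy:", cross_val_score(clf, X_reduced, y, cv=5).mean().round(3))
```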


Cancer is the term used to describe a class of diseases in which abnormal cells divide uncontrollably and invade body tissues. There are more than 100 distinct types of cancer, and breast cancer is one of the deadliest diseases affecting women. If the prediction is made at an early stage and the results are accurate, the number of deaths per year can be reduced, so a new approach is needed to predict cancer at an early stage with accurate results. Machine learning algorithms are therefore used for this prediction. This paper analyses different machine learning algorithms for predicting cancer and compares their accuracy; the results show that SVM is the most accurate.
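As a minimal sketch of the SVM approach the paper finds most accurate, the following uses scikit-learn's bundled Wisconsin breast cancer dataset as a stand-in, since the paper's own dataset is not specified here; the pipeline and kernel choice are assumptions.

```python
# RBF-kernel SVM on the Wisconsin breast cancer dataset (stand-in data).
from sklearn.datasets import load_breast_cancer
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Scaling before an SVM is standard practice for numeric clinical features.
model = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
model.fit(X_train, y_train)
print("accuracy:", accuracy_score(y_test, model.predict(X_test)))
```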


Author(s):  
Ayomide Emmanuel Adesiyan

Manufacturing today relies on data-driven business operations at different levels, leading to the growth of various manufacturing paradigms, from which smart manufacturing emerged. Data can be used to predict equipment failure rates, streamline and optimize inventory management, and prioritize processes. Parameter tuning and optimization, grid search, and cross-validation are used to identify the best-performing machine learning algorithm. This research work evaluates potential failure rates over time against the production lines, which peak and drop depending on the remaining useful life (RUL) of their components. The accuracy of the machine learning algorithms employed in this study is evaluated against two metrics: MCC and AUC-ROC. This study analyses an anonymized dataset from a manufacturing company, using these metrics and machine learning algorithms for performance prediction of its production lines with unsupervised learning. It can serve as a good reference for anyone seeking the best-performing model for further research work.
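The tuning workflow named above (grid search with cross-validation, scored on MCC) can be sketched as follows. The anonymized failure/RUL dataset is not reproduced here, so synthetic data, the random forest, and the parameter grid are assumptions; AUC-ROC could be substituted via scoring="roc_auc".

```python
# Grid search with cross-validation, scored on the Matthews correlation
# coefficient (one of the two metrics used in the study).
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV

X, y = make_classification(n_samples=1000, n_features=25, random_state=2)

grid = GridSearchCV(
    RandomForestClassifier(random_state=2),
    param_grid={"n_estimators": [100, 300], "max_depth": [None, 10]},
    scoring="matthews_corrcoef",
    cv=5,
)
grid.fit(X, y)
print("best params:", grid.best_params_, "best MCC:", round(grid.best_score_, 3))
```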


2020 ◽  
pp. 1-11
Author(s):  
Jie Liu ◽  
Lin Lin ◽  
Xiufang Liang

The online English teaching system places certain requirements on the intelligent scoring system, and the most difficult stage of intelligent scoring in an English test is scoring the English composition with an intelligent model. In order to improve the intelligence of English composition scoring, this study combines machine learning algorithms with intelligent image recognition technology and proposes an improved MSER-based character candidate region extraction algorithm and a convolutional neural network-based pseudo-character region filtering algorithm. In addition, to verify that the proposed algorithm model meets the requirements, that is, to verify its feasibility, the performance of the model is analysed through designed experiments. Moreover, the basic conditions for composition scoring are input into the model as constraints. The research results show that the proposed algorithm has a practical effect and can be applied to English assessment and online homework evaluation systems.
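For orientation, a plain MSER character-candidate extraction with OpenCV is sketched below; the paper's improvements to MSER and its CNN filtering stage are not given in the abstract, so this only shows the baseline step on a synthetic page image.

```python
# Baseline MSER character-candidate extraction (not the paper's improved version).
import cv2
import numpy as np

# Synthetic stand-in page: white canvas with dark "characters".
page = np.full((120, 320), 255, dtype=np.uint8)
cv2.putText(page, "Score me", (10, 70), cv2.FONT_HERSHEY_SIMPLEX, 1.5, 0, 3)

mser = cv2.MSER_create()
regions, bboxes = mser.detectRegions(page)  # candidate character regions

# Draw each candidate's bounding box; a CNN stage would then filter
# out pseudo-character regions among these candidates.
out = cv2.cvtColor(page, cv2.COLOR_GRAY2BGR)
for (x, y, w, h) in bboxes:
    cv2.rectangle(out, (x, y), (x + w, y + h), (0, 255, 0), 1)
cv2.imwrite("candidates.png", out)
```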


2021 ◽  
pp. 1-17
Author(s):  
Ahmed Al-Tarawneh ◽  
Ja’afer Al-Saraireh

Twitter is one of the most popular platforms used to share and post ideas. Hackers and anonymous attackers use these platforms maliciously, and their behavior can be used to predict the risk of future attacks by gathering and classifying hackers’ tweets using machine-learning techniques. Previous approaches to detecting infected tweets are based on human effort or text analysis and are thus limited in capturing the hidden meaning between tweet lines. The main aim of this research paper is to enhance the efficiency of hacker detection on the Twitter platform using the complex networks technique with adapted machine learning algorithms. This work presents a methodology that collects a list of users, together with their followers, who share posts with similar interests within a hackers’ community on Twitter. The list is built from a set of suggested keywords that are the terms commonly used by hackers in their tweets. A complex network is then generated for all users to find relations among them in terms of network centrality, closeness, and betweenness. After extracting these values, a dataset of the most influential users in the hacker community is assembled. Subsequently, tweets belonging to users in the extracted dataset are gathered and classified into positive and negative classes, and the output of this process is fed into a machine learning process applying different algorithms. This research builds and investigates an accurate dataset containing real users who belong to a hackers’ community. Correctly classified instances were measured for accuracy using the average values of K-nearest neighbor, Naive Bayes, Random Tree, and support vector machine techniques, demonstrating about 90% and 88% accuracy for cross-validation and percentage split, respectively. Consequently, the proposed network cyber Twitter model is able to detect hackers and determine whether tweets pose a risk to institutions and individuals, providing early warning of possible attacks.
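The complex-network step described above can be sketched with networkx: build a directed follower graph and rank users by centrality measures. The edge list is a made-up example; the paper's keyword-collected user data are not public.

```python
# Rank users in a (hypothetical) follower graph by centrality measures.
import networkx as nx

# Hypothetical (follower -> followed) edges among collected users.
edges = [("user_a", "user_b"), ("user_b", "user_c"), ("user_a", "user_c")]
G = nx.DiGraph(edges)

centrality = {
    "degree": nx.degree_centrality(G),
    "closeness": nx.closeness_centrality(G),
    "betweenness": nx.betweenness_centrality(G),
}
# Users scoring highly across measures would form the "most influential" dataset.
for measure, scores in centrality.items():
    print(measure, max(scores, key=scores.get))
```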


Sensors ◽  
2021 ◽  
Vol 21 (2) ◽  
pp. 656
Author(s):  
Xavier Larriva-Novo ◽  
Víctor A. Villagrá ◽  
Mario Vega-Barbas ◽  
Diego Rivera ◽  
Mario Sanz Rodrigo

Security in IoT networks is currently mandatory due to the high volume of data that has to be handled. These systems are vulnerable to several cybersecurity attacks, which are increasing in number and sophistication. For this reason, new intrusion detection techniques have to be developed that are as accurate as possible for these scenarios. Intrusion detection systems based on machine learning algorithms have already shown high performance in terms of accuracy. This research proposes the study and evaluation of several preprocessing techniques, based on traffic categorization, for a machine learning neural network algorithm. For its evaluation, this research uses two benchmark datasets, UGR16 and UNSW-NB15, and one of the most widely used datasets, KDD99. The preprocessing techniques were evaluated using scaling and normalization functions. All of these preprocessing models were applied to different sets of characteristics based on a categorization composed of four groups of features: basic connection features, content characteristics, statistical characteristics, and, finally, a group composed of traffic-based features and connection-direction-based traffic characteristics. The objective of this research is to evaluate this categorization by using various data preprocessing techniques to obtain the most accurate model. Our proposal shows that, by applying the categorization of network traffic and several preprocessing techniques, the accuracy can be enhanced by up to 45%. Preprocessing a specific group of characteristics allows for greater accuracy, enabling the machine learning algorithm to correctly classify parameters related to possible attacks.
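The evaluation idea can be sketched as below: the same neural-network classifier is trained on features preprocessed with different scaling/normalization functions and the accuracies are compared. Synthetic data stand in for the KDD99/UGR16/UNSW-NB15 feature groups, and the network architecture is an assumption.

```python
# Compare scaling/normalization functions feeding the same neural network.
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import MinMaxScaler, Normalizer, StandardScaler

X, y = make_classification(n_samples=1000, n_features=20, random_state=3)

for scaler in (StandardScaler(), MinMaxScaler(), Normalizer()):
    pipe = make_pipeline(scaler, MLPClassifier(hidden_layer_sizes=(64,),
                                               max_iter=500, random_state=3))
    score = cross_val_score(pipe, X, y, cv=3).mean()
    print(type(scaler).__name__, round(score, 3))
```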


Sensors ◽  
2021 ◽  
Vol 21 (2) ◽  
pp. 617
Author(s):  
Umer Saeed ◽  
Young-Doo Lee ◽  
Sana Ullah Jan ◽  
Insoo Koo

Sensors are a key component of Cyber-Physical Systems, which makes them susceptible to failure due to complex environments, low-quality production, and aging. When defective, sensors either stop communicating or convey incorrect information. These unsteady situations threaten the safety, economy, and reliability of a system. The objective of this study is to construct a lightweight machine learning-based fault detection and diagnostic system within the limited energy resources, memory, and computation of a Wireless Sensor Network (WSN). In this paper, a Context-Aware Fault Diagnostic (CAFD) scheme is proposed based on an ensemble learning algorithm called Extra-Trees. To evaluate the performance of the proposed scheme, a realistic WSN scenario composed of humidity and temperature sensor observations is replicated with extremely low-intensity faults. Six commonly occurring types of sensor fault are considered: drift, hard-over/bias, spike, erratic/precision degradation, stuck, and data loss. The proposed CAFD scheme demonstrates the ability to accurately detect and diagnose low-intensity sensor faults in a timely manner. Moreover, the efficiency of the Extra-Trees algorithm in terms of diagnostic accuracy, F1-score, ROC-AUC, and training time is demonstrated by comparison with cutting-edge machine learning algorithms: a Support Vector Machine and a Neural Network.
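A minimal sketch of the Extra-Trees classifier at the core of the CAFD scheme follows. Synthetic multi-class data stand in for the humidity/temperature observations; the six fault labels are taken from the abstract, and all parameters are assumptions.

```python
# Extra-Trees multi-class classification of the six sensor-fault types.
from sklearn.datasets import make_classification
from sklearn.ensemble import ExtraTreesClassifier
from sklearn.metrics import f1_score
from sklearn.model_selection import train_test_split

FAULTS = ["drift", "bias", "spike", "erratic", "stuck", "data-loss"]
X, y = make_classification(n_samples=1200, n_features=10, n_informative=6,
                           n_classes=6, random_state=4)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=4)
clf = ExtraTreesClassifier(n_estimators=100, random_state=4).fit(X_train, y_train)

# Per-fault F1 scores, labelled with the fault types from the abstract.
scores = f1_score(y_test, clf.predict(X_test), average=None)
print(dict(zip(FAULTS, scores.round(3))))
```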


Metabolites ◽  
2021 ◽  
Vol 11 (6) ◽  
pp. 363
Author(s):  
Louise Cottle ◽  
Ian Gilroy ◽  
Kylie Deng ◽  
Thomas Loudovaris ◽  
Helen E. Thomas ◽  
...  

Pancreatic β cells secrete the hormone insulin into the bloodstream and are critical in the control of blood glucose concentrations. β cells are clustered in the micro-organs of the islets of Langerhans, which have a rich capillary network. Recent work has highlighted the intimate spatial connections between β cells and these capillaries, which lead to the targeting of insulin secretion to the region where the β cells contact the capillary basement membrane. In addition, β cells orientate with respect to the capillary contact point and many proteins are differentially distributed at the capillary interface compared with the rest of the cell. Here, we set out to develop an automated image analysis approach to identify individual β cells within intact islets and to determine if the distribution of insulin across the cells was polarised. Our results show that a U-Net machine learning algorithm correctly identified β cells and their orientation with respect to the capillaries. Using this information, we then quantified insulin distribution across the β cells to show enrichment at the capillary interface. We conclude that machine learning is a useful analytical tool to interrogate large image datasets and analyse sub-cellular organisation.
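As a rough illustration of the architecture family named above, a minimal U-Net-style encoder-decoder is sketched below in PyTorch. The paper's actual network depth, channel counts, and training procedure are not given in the abstract, so every detail here is an assumption.

```python
# Tiny U-Net-style encoder-decoder: one downsampling stage, one skip connection.
import torch
import torch.nn as nn

def block(c_in, c_out):
    return nn.Sequential(nn.Conv2d(c_in, c_out, 3, padding=1), nn.ReLU(),
                         nn.Conv2d(c_out, c_out, 3, padding=1), nn.ReLU())

class TinyUNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.enc = block(1, 16)
        self.down = nn.MaxPool2d(2)
        self.mid = block(16, 32)
        self.up = nn.ConvTranspose2d(32, 16, 2, stride=2)
        self.dec = block(32, 16)        # 32 = 16 skip + 16 upsampled channels
        self.out = nn.Conv2d(16, 1, 1)  # per-pixel cell/background logit

    def forward(self, x):
        e = self.enc(x)
        m = self.mid(self.down(e))
        u = self.up(m)
        return self.out(self.dec(torch.cat([e, u], dim=1)))

mask_logits = TinyUNet()(torch.randn(1, 1, 64, 64))  # shape: (1, 1, 64, 64)
```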

