Machine Learning Algorithms are Pre-Programmed to Humans

In the next 25 years, AI will evolve to the point where it will know more on an intellectual level than any human. In the next 50 or 100 years, an AI might know more than the entire population of the planet put together. At that point, there are serious questions to ask about whether this AI - which could design and program additional AI programs all on its own, read data from an almost infinite number of data sources, and control almost every connected device on the planet - will somehow rise in status to become more like a god, something that can write its own bible and draw humans to worship it. The problem is that machine learning algorithms are pre-programmed by humans and may lead to predatory, bio-robot-like behavior [1].

2020 ◽  
Author(s):  
Michael Moor ◽  
Bastian Rieck ◽  
Max Horn ◽  
Catherine Jutzeler ◽  
Karsten Borgwardt

Background: Sepsis is among the leading causes of death in intensive care units (ICUs) worldwide and its recognition, particularly in the early stages of the disease, remains a medical challenge. The advent of an affluence of available digital health data has created a setting in which machine learning can be used for digital biomarker discovery, with the ultimate goal to advance the early recognition of sepsis. Objective: To systematically review and evaluate studies employing machine learning for the prediction of sepsis in the ICU. Data sources: Using Embase, Google Scholar, PubMed/Medline, Scopus, and Web of Science, we systematically searched the existing literature for machine learning-driven sepsis onset prediction for patients in the ICU. Study eligibility criteria: All peer-reviewed articles using machine learning for the prediction of sepsis onset in adult ICU patients were included. Studies focusing on patient populations outside the ICU were excluded. Study appraisal and synthesis methods: A systematic review was performed according to the PRISMA guidelines. Moreover, a quality assessment of all eligible studies was performed. Results: Out of 974 identified articles, 22 and 21 met the criteria to be included in the systematic review and quality assessment, respectively. A multitude of machine learning algorithms were applied to refine the early prediction of sepsis. The quality of the studies ranged from "poor" (satisfying less than 40% of the quality criteria) to "very good" (satisfying more than 90% of the quality criteria). The majority of the studies (n = 19, 86.4%) employed an offline training scenario combined with a horizon evaluation, while two studies implemented an online scenario (n = 2, 9.1%). The massive inter-study heterogeneity in terms of model development, sepsis definition, prediction time windows, and outcomes precluded a meta-analysis. Last, only two studies provided publicly accessible source code and data sources, fostering reproducibility. Limitations: Articles were only eligible for inclusion when employing machine learning algorithms for the prediction of sepsis onset in the ICU. This restriction led to the exclusion of studies focusing on the prediction of septic shock, sepsis-related mortality, and patient populations outside the ICU. Conclusions and key findings: A growing number of studies employ machine learning to optimise the early prediction of sepsis through digital biomarker discovery. This review, however, highlights several shortcomings of the current approaches, including low comparability and reproducibility. Finally, we gather recommendations on how these challenges can be addressed before deploying these models in prospective analyses. Systematic review registration number: CRD42020200133
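To make the "offline training combined with a horizon evaluation" setup described in the review concrete, the following Python sketch labels each hourly ICU observation as positive if sepsis onset occurs within a fixed prediction horizon and splits the data by patient stay. All data, feature names, the 6-hour horizon, and the choice of classifier are synthetic assumptions for illustration, not taken from any of the reviewed studies.

```python
# Minimal sketch of offline training with a horizon evaluation:
# each hourly ICU observation is labelled positive if sepsis onset
# occurs within the next `horizon` hours. Data are synthetic.
import numpy as np
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import GroupShuffleSplit
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
horizon = 6  # predict sepsis onset up to 6 h ahead (assumed value)

# Synthetic hourly vital signs for 200 ICU stays
rows = []
for stay in range(200):
    onset = rng.integers(10, 48) if rng.random() < 0.3 else None  # ~30% septic
    for t in range(48):
        rows.append({
            "stay_id": stay,
            "hour": t,
            "heart_rate": rng.normal(85, 15),
            "resp_rate": rng.normal(18, 4),
            "temperature": rng.normal(37.0, 0.7),
            "lactate": rng.normal(1.5, 0.8),
            # positive label: onset falls inside the prediction horizon
            "label": int(onset is not None and t < onset <= t + horizon),
        })
df = pd.DataFrame(rows)

features = ["heart_rate", "resp_rate", "temperature", "lactate"]
# Split by stay (not by row) so no patient appears in both sets
splitter = GroupShuffleSplit(n_splits=1, test_size=0.25, random_state=0)
train_idx, test_idx = next(splitter.split(df, groups=df["stay_id"]))
train, test = df.iloc[train_idx], df.iloc[test_idx]

clf = GradientBoostingClassifier().fit(train[features], train["label"])
auc = roc_auc_score(test["label"], clf.predict_proba(test[features])[:, 1])
print(f"Horizon = {horizon} h, held-out AUROC: {auc:.3f}")
```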


Author(s):  
Mohsen Kamyab ◽  
Stephen Remias ◽  
Erfan Najmi ◽  
Sanaz Rabinia ◽  
Jonathan M. Waddell

The aim of deploying intelligent transportation systems (ITS) is often to help engineers and operators identify traffic congestion. The future of ITS-based traffic management is the prediction of traffic conditions using ubiquitous data sources. There are currently well-developed prediction models for recurrent traffic congestion, such as during peak hours. However, there is a need to predict traffic congestion resulting from non-recurring events such as highway lane closures. As agencies begin to understand the value of collecting work zone data, rich data sets will emerge consisting of historical work zone information. In the era of big data, rich mobility data sources are becoming available that enable the application of machine learning to predict mobility for work zones. The purpose of this study is to utilize historical lane closure information with supervised machine learning algorithms to forecast spatio-temporal mobility for future lane closures. Various traffic data sources were collected from 1,160 work zones on Michigan interstates between 2014 and 2017. This study uses probe vehicle data to retrieve a mobility profile for these historical observations, and uses these profiles to apply random forest, XGBoost, and artificial neural network (ANN) classification algorithms. The mobility prediction results showed that the ANN model outperformed the other models by reaching up to 85% accuracy. The objective of this research was to show that machine learning algorithms can be used to capture patterns for non-recurrent traffic congestion even when hourly traffic volume is not available.
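As a rough illustration of the supervised classification step described above (not the study's actual pipeline), the following Python sketch trains a random forest and a small neural network to predict a mobility-impact class from lane-closure attributes. The feature names, class definitions, and data below are hypothetical placeholders for the historical work-zone records.

```python
# Sketch: classify work-zone mobility impact from lane-closure attributes.
# Features, labels, and data are illustrative placeholders.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(1)
n = 1160  # number of historical work zones mentioned in the abstract

# Hypothetical features: closure length (mi), duration (h), traffic volume
# proxy, number of closed lanes, start hour of day
X = np.column_stack([
    rng.uniform(0.5, 10, n),
    rng.uniform(2, 72, n),
    rng.uniform(20_000, 120_000, n),
    rng.integers(1, 3, n),
    rng.integers(0, 24, n),
])
# Hypothetical mobility class derived from a probe-vehicle speed profile:
# 0 = minimal impact, 1 = moderate queueing, 2 = severe congestion
y = rng.integers(0, 3, n)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=1)
models = [
    ("random forest", RandomForestClassifier(random_state=1)),
    ("ANN (MLP)", make_pipeline(StandardScaler(),
                                MLPClassifier(max_iter=500, random_state=1))),
]
for name, model in models:
    model.fit(X_tr, y_tr)
    print(name, "accuracy:", round(accuracy_score(y_te, model.predict(X_te)), 3))
```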


Drones ◽  
2020 ◽  
Vol 4 (2) ◽  
pp. 21 ◽  
Author(s):  
Francisco Rodríguez-Puerta ◽  
Rafael Alonso Ponce ◽  
Fernando Pérez-Rodríguez ◽  
Beatriz Águeda ◽  
Saray Martín-García ◽  
...  

Controlling vegetation fuels around human settlements is a crucial strategy for reducing fire severity in forests, buildings and infrastructure, as well as protecting human lives. Each country has its own regulations in this respect, but they all have in common that by reducing fuel load, we in turn reduce the intensity and severity of the fire. Unmanned Aerial Vehicle (UAV)-acquired data combined with other passive and active remote sensing data offers the best performance for planning Wildland-Urban Interface (WUI) fuelbreaks through machine learning algorithms. Nine remote sensing data sources (active and passive) and four supervised classification algorithms (Random Forest, Linear and Radial Support Vector Machine, and Artificial Neural Networks) were tested to classify five fuel-area types. We used very high-density Light Detection and Ranging (LiDAR) data acquired by UAV (154 returns·m−2 and an ortho-mosaic with 5 cm pixels), multispectral data from the satellites Pleiades-1B and Sentinel-2, and low-density LiDAR data acquired by Airborne Laser Scanning (ALS) (0.5 returns·m−2 and an ortho-mosaic with 25 cm pixels). Through the Variable Selection Using Random Forest (VSURF) procedure, a pre-selection of final variables was carried out to train the model. The four algorithms were compared, and it was concluded that the differences among them in overall accuracy (OA) on training datasets were negligible. Although the highest accuracy in the training step was obtained with SVML (OA = 94.46%) and in testing with ANN (OA = 91.91%), Random Forest was considered to be the most reliable algorithm, since it produced more consistent predictions due to the smaller differences between training and testing performance. Using a combination of Sentinel-2 and the two LiDAR datasets (UAV and ALS), Random Forest obtained an OA of 90.66% on the training and 91.80% on the testing datasets. The differences in accuracy between the data sources used are much greater than between algorithms. LiDAR growth metrics calculated from point clouds acquired on different dates, and multispectral information from different seasons of the year, are the most important variables in the classification. Our results support the essential role of UAVs in fuelbreak planning and management and thus in the prevention of forest fires.
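A rough Python analogue of this workflow is sketched below: importance-based variable pre-selection followed by random forest classification of fuel-area types. VSURF itself is an R package; here plain random forest importances stand in for it, and the feature names, selection threshold, and data are synthetic assumptions rather than the study's variables.

```python
# Sketch: importance-based variable pre-selection (stand-in for VSURF)
# followed by fuel-type classification with a random forest.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
n = 1500
feature_names = ["canopy_height_p95", "return_density", "ndvi_summer",
                 "ndvi_winter", "height_growth", "intensity_mean",
                 "red_edge", "slope", "aspect"]
X = rng.normal(size=(n, len(feature_names)))
y = rng.integers(0, 5, n)  # five fuel-area types, as in the study

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=42)

# Step 1: fit a forest on all variables and rank them by importance
rf_full = RandomForestClassifier(n_estimators=300, random_state=42).fit(X_tr, y_tr)
ranked = np.argsort(rf_full.feature_importances_)[::-1]
selected = ranked[:5]  # keep the top-ranked variables (arbitrary cut-off here)
print("Selected variables:", [feature_names[i] for i in selected])

# Step 2: retrain on the reduced variable set and report hold-out accuracy
rf_sel = RandomForestClassifier(n_estimators=300, random_state=42)
rf_sel.fit(X_tr[:, selected], y_tr)
print("Hold-out OA:", round(rf_sel.score(X_te[:, selected], y_te), 3))
```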


2018 ◽  
Vol 7 (2.8) ◽  
pp. 472 ◽  
Author(s):  
Shruti Banerjee ◽  
Partha Sarathi Chakraborty

SDN (Software Defined Network) is rapidly gaining importance as a 'programmable network' infrastructure. The SDN architecture separates the Data plane (forwarding devices) and the Control plane (the SDN controller). This makes it easy to deploy new versions to the infrastructure and provides straightforward network virtualization. Distributed Denial-of-Service (DDoS) attacks are a major cyber security threat to SDN, with the data plane and the control plane being equally vulnerable. In this paper, machine learning algorithms such as Naïve Bayesian, KNN, K-Means, K-Medoids, and Linear Regression are used to classify incoming traffic as normal or anomalous. The above-mentioned algorithms are evaluated using two metrics: accuracy and detection rate. The best-fitting algorithm is applied to implement the signature IDS, which forms module 1 of the proposed IDS. The second module uses open connections to identify the exact attacking node and to block that particular IP address by placing it in an Access Control List (ACL), thus increasing the processing speed of the SDN as a whole.
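The comparison step described above can be illustrated with a short Python sketch: a few of the listed classifiers are scored on labelled flow records using accuracy and detection rate (recall on the attack class). The flow features and data are synthetic, and logistic regression stands in for the regression-based classifier; none of this reproduces the paper's actual dataset or results.

```python
# Sketch: compare classifiers on labelled traffic flows using accuracy
# and detection rate (recall on the attack class). Data are synthetic.
import numpy as np
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.linear_model import LogisticRegression  # stand-in for the regression model
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score, recall_score

rng = np.random.default_rng(7)
n = 5000
# Hypothetical per-flow features: packet rate, byte rate, flow duration,
# number of distinct destination ports contacted
X = rng.normal(size=(n, 4))
y = (X[:, 0] + 0.5 * X[:, 3] + rng.normal(0, 0.5, n) > 1.2).astype(int)  # 1 = attack

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=7)
models = {
    "Naive Bayes": GaussianNB(),
    "KNN": KNeighborsClassifier(),
    "Logistic Regression": LogisticRegression(),
}
for name, model in models.items():
    model.fit(X_tr, y_tr)
    pred = model.predict(X_te)
    print(f"{name}: accuracy={accuracy_score(y_te, pred):.3f}, "
          f"detection rate={recall_score(y_te, pred):.3f}")
```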


2021 ◽  
Vol 9 (2) ◽  
pp. 1214-1219
Author(s):  
Sheha Kothari, et al.

Artificial intelligence (AI) has made incredible progress, resulting in highly sophisticated software and standalone systems. Meanwhile, the cyber domain has become a battleground for access, influence, security and control. This paper discusses key AI technologies, including machine learning, in an effort to help understand their role in cyber security and the implications of this new technology, and highlights the different uses of machine learning in cyber security.


Author(s):  
А.Н. ВИНОГРАДОВ ◽  
А.С. СУРМАЧЕВ

A method is proposed for detecting characteristic distortions of the speech signal in mobile radio communication systems under conditions of a priori uncertainty about the signal reception conditions and signal quality. The proposed method is based on machine learning algorithms, in particular the construction of decision trees and their ensembles. A detailed description of the signal features used for classification is given, together with the characteristics of the training and control samples. Program code fragments reflecting the key stages of the method's operation, as well as experimentally obtained results, are presented.
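In the spirit of the described approach (but not reproducing the paper's feature set or data), the sketch below trains a single decision tree and a tree ensemble to classify signal frames as clean or distorted from a few hypothetical spectral features.

```python
# Sketch: decision tree vs. tree ensemble for detecting distorted signal
# frames. Features and data are hypothetical placeholders.
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(3)
n = 2000
# Hypothetical frame-level features: spectral centroid, spectral flatness,
# zero-crossing rate, short-time energy
X = rng.normal(size=(n, 4))
y = (0.8 * X[:, 1] - 0.6 * X[:, 2] + rng.normal(0, 0.4, n) > 0).astype(int)

for name, clf in [("single decision tree", DecisionTreeClassifier(max_depth=5)),
                  ("random forest ensemble", RandomForestClassifier(n_estimators=200))]:
    score = cross_val_score(clf, X, y, cv=5).mean()
    print(f"{name}: mean CV accuracy = {score:.3f}")
```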


Author(s):  
Ivo Bukovsky ◽  
Peter M. Benes ◽  
Martin Vesely

This chapter recalls nonlinear polynomial neurons and their incremental and batch learning algorithms for both plant identification and neuro-controller adaptation. The authors explain and demonstrate the use of feed-forward as well as recurrent polynomial neurons for system approximation and control via fundamental, yet practically efficient, machine learning algorithms such as Ridge Regression, Levenberg-Marquardt, and Conjugate Gradients; they also discuss the use of novel optimizers such as ADAM and BFGS. Incremental gradient descent and RLS algorithms for plant identification and control are explained and demonstrated. Also, a novel BIBS stability analysis for recurrent HONUs and for closed control loops with a linear plant and a nonlinear (HONU) controller is discussed and demonstrated.
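To give a flavour of the batch identification idea, the following Python sketch fits a quadratic polynomial neuron (a HONU-like second-order expansion of delayed inputs and outputs) to a plant output using ridge regression. The plant model, lags, and regularization value are assumptions for illustration, not the chapter's examples.

```python
# Sketch: quadratic polynomial-neuron (QNU-style) plant identification
# by batch ridge regression on a synthetic nonlinear plant.
import numpy as np
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import Ridge

rng = np.random.default_rng(5)
N = 1000
u = rng.uniform(-1, 1, N)                      # excitation input
y = np.zeros(N)
for k in range(2, N):                          # simple nonlinear plant (assumed)
    y[k] = 0.6 * y[k - 1] - 0.1 * y[k - 2] + 0.5 * u[k - 1] + 0.2 * y[k - 1] * u[k - 1]

# Regressor of delayed outputs and inputs, expanded to 2nd order
lag = 2
rows = [np.r_[y[k - lag:k][::-1], u[k - lag:k][::-1]] for k in range(lag, N)]
X = PolynomialFeatures(degree=2, include_bias=False).fit_transform(np.array(rows))
target = y[lag:]

model = Ridge(alpha=1e-3).fit(X, target)       # batch ridge-regression weights
pred = model.predict(X)
print("One-step-ahead RMSE:", np.sqrt(np.mean((pred - target) ** 2)))
```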


Author(s):  
Didem Özkul

With this article, I introduce the ‘algorithmic fix’ as a framework to analyze contemporary placemaking practices. I discuss how algorithmic practices of placemaking govern and control mobilities. I theorize such practices as the ‘algorithmic fix’, where location determination technologies, data practices, and machine learning algorithms are used together to ‘get a fix on’ our whereabouts with the aim of sorting and classifying both people and places. Through a case study of location intelligence, I demonstrate how these digital placemaking practices not only control and prevent physical mobilities but are also designed to fix who we are and whom we may become, with the aim of creating a predictable future. I focus on geo-profiling, geo-fencing, and predictive policing as three key aspects of location intelligence to present a discussion of how the ‘algorithmic fix’ as a framework can provide valuable insights into analyzing contemporary placemaking practices.


2021 ◽  
Vol 8 ◽  
Author(s):  
Michael Moor ◽  
Bastian Rieck ◽  
Max Horn ◽  
Catherine R. Jutzeler ◽  
Karsten Borgwardt

Background: Sepsis is among the leading causes of death in intensive care units (ICUs) worldwide and its recognition, particularly in the early stages of the disease, remains a medical challenge. The advent of an affluence of available digital health data has created a setting in which machine learning can be used for digital biomarker discovery, with the ultimate goal to advance the early recognition of sepsis. Objective: To systematically review and evaluate studies employing machine learning for the prediction of sepsis in the ICU. Data Sources: Using Embase, Google Scholar, PubMed/Medline, Scopus, and Web of Science, we systematically searched the existing literature for machine learning-driven sepsis onset prediction for patients in the ICU. Study Eligibility Criteria: All peer-reviewed articles using machine learning for the prediction of sepsis onset in adult ICU patients were included. Studies focusing on patient populations outside the ICU were excluded. Study Appraisal and Synthesis Methods: A systematic review was performed according to the PRISMA guidelines. Moreover, a quality assessment of all eligible studies was performed. Results: Out of 974 identified articles, 22 and 21 met the criteria to be included in the systematic review and quality assessment, respectively. A multitude of machine learning algorithms were applied to refine the early prediction of sepsis. The quality of the studies ranged from “poor” (satisfying ≤ 40% of the quality criteria) to “very good” (satisfying ≥ 90% of the quality criteria). The majority of the studies (n = 19, 86.4%) employed an offline training scenario combined with a horizon evaluation, while two studies implemented an online scenario (n = 2, 9.1%). The massive inter-study heterogeneity in terms of model development, sepsis definition, prediction time windows, and outcomes precluded a meta-analysis. Last, only two studies provided publicly accessible source code and data sources, fostering reproducibility. Limitations: Articles were only eligible for inclusion when employing machine learning algorithms for the prediction of sepsis onset in the ICU. This restriction led to the exclusion of studies focusing on the prediction of septic shock, sepsis-related mortality, and patient populations outside the ICU. Conclusions and Key Findings: A growing number of studies employ machine learning to optimize the early prediction of sepsis through digital biomarker discovery. This review, however, highlights several shortcomings of the current approaches, including low comparability and reproducibility. Finally, we gather recommendations on how these challenges can be addressed before deploying these models in prospective analyses. Systematic Review Registration Number: CRD42020200133.


Pharmaceutics ◽  
2021 ◽  
Vol 13 (9) ◽  
pp. 1432
Author(s):  
Nimra Munir ◽  
Michael Nugent ◽  
Darren Whitaker ◽  
Marion McAfee

In the last few decades, hot-melt extrusion (HME) has emerged as a rapidly growing technology in the pharmaceutical industry, due to its various advantages over other fabrication routes for drug delivery systems. After the introduction of the ‘quality by design’ (QbD) approach by the Food and Drug Administration (FDA), many research studies have focused on implementing process analytical technology (PAT), including near-infrared (NIR), Raman, and UV–Vis, coupled with various machine learning algorithms, to monitor and control the HME process in real time. This review gives a comprehensive overview of the application of machine learning algorithms for HME processes, with a focus on pharmaceutical HME applications. The main current challenges in the application of machine learning algorithms for pharmaceutical processes are discussed, with potential future directions for the industry.
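As a minimal illustration of the kind of PAT soft sensor this review covers, the sketch below fits a partial least squares (PLS) regression mapping in-line NIR-like spectra to drug content. The spectra, concentration range, and number of latent variables are synthetic assumptions, not data from any reviewed HME study.

```python
# Sketch: PLS soft sensor mapping (synthetic) in-line spectra to drug content.
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(11)
n_samples, n_wavelengths = 120, 200
concentration = rng.uniform(5, 30, n_samples)          # % w/w API (assumed range)

# Synthetic spectra: a concentration-dependent peak plus noise
wl = np.linspace(0, 1, n_wavelengths)
peak = np.exp(-((wl - 0.4) ** 2) / 0.002)
spectra = (concentration[:, None] * peak[None, :]
           + rng.normal(0, 0.5, (n_samples, n_wavelengths)))

X_tr, X_te, y_tr, y_te = train_test_split(spectra, concentration,
                                          test_size=0.25, random_state=11)
pls = PLSRegression(n_components=3).fit(X_tr, y_tr)
rmsep = np.sqrt(np.mean((pls.predict(X_te).ravel() - y_te) ** 2))
print(f"RMSEP on held-out spectra: {rmsep:.2f} % w/w")
```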

