Development of use-specific high-performance cyber-nanomaterial optical detectors by effective choice of machine learning algorithms

2020 ◽  
Vol 1 (2) ◽  
pp. 025007 ◽  
Author(s):  
Davoud Hejazi ◽  
Shuangjun Liu ◽  
Amirreza Farnoosh ◽  
Sarah Ostadabbas ◽  
Swastik Kar
Electronics ◽  
2021 ◽  
Vol 10 (5) ◽  
pp. 600
Author(s):  
Gianluca Cornetta ◽  
Abdellah Touhafi

Low-cost, high-performance embedded devices are proliferating, and a plethora of new platforms are available on the market. Some of them either embed a GPU or can be connected to external hardware accelerators for Machine Learning (ML) algorithms. These enhanced hardware features enable new applications in which AI-powered smart objects can effectively and pervasively run distributed ML algorithms in real time, shifting part of the raw-data analysis and processing from the cloud or the edge to the device itself. In this context, Artificial Intelligence (AI) can be considered the backbone of the next generation of Internet of Things (IoT) devices, which will no longer be mere data collectors and forwarders but truly “smart” devices with built-in data wrangling and data analysis features that leverage lightweight machine learning algorithms to make autonomous decisions in the field. This work thoroughly reviews and analyses the most popular ML algorithms, with particular emphasis on those that are more suitable to run on resource-constrained embedded devices. In addition, several machine learning algorithms have been built on top of a custom multi-dimensional array library. The designed framework has been evaluated and its performance stressed on Raspberry Pi 3 and Raspberry Pi 4 embedded computers.
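
As a concrete illustration of the kind of lightweight algorithm such a review targets, the sketch below trains a small logistic-regression classifier with plain batch gradient descent, a workload that fits comfortably on a resource-constrained device. It is an assumption-laden example: NumPy stands in for the authors' custom multi-dimensional array library (whose API is not described in the abstract), and the function names and toy data are hypothetical, not taken from the paper.

# Minimal sketch of a lightweight classifier; NumPy is a stand-in for the
# paper's custom array library, and the data below is a toy example.
import numpy as np

def train_logistic_regression(X, y, lr=0.1, epochs=200):
    """Batch gradient descent; small memory footprint suits embedded CPUs."""
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(epochs):
        z = X @ w + b
        p = 1.0 / (1.0 + np.exp(-z))        # sigmoid activation
        grad_w = X.T @ (p - y) / len(y)     # gradient of the log-loss
        grad_b = np.mean(p - y)
        w -= lr * grad_w
        b -= lr * grad_b
    return w, b

def predict(X, w, b):
    return (1.0 / (1.0 + np.exp(-(X @ w + b))) >= 0.5).astype(int)

# Hypothetical usage with toy data (not from the paper):
X = np.array([[0.1, 1.2], [0.4, 0.9], [1.5, 0.2], [1.8, 0.1]])
y = np.array([0, 0, 1, 1])
w, b = train_logistic_regression(X, y)
print(predict(X, w, b))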


2020 ◽  
Vol 245 ◽  
pp. 08016
Author(s):  
Stefano Bagnasco ◽  
Gabriele Gaetano Fronzé ◽  
Federica Legger ◽  
Stefano Lusso ◽  
Sara Vallero

In recent years, proficiency in data science and machine learning (ML) has become one of the most requested skills for jobs in both industry and academia. Machine learning algorithms typically require large sets of data to train the models and extensive usage of computing resources, both for training and inference. Especially for deep learning algorithms, training performance can be dramatically improved by exploiting Graphical Processing Units (GPUs). The skill set needed by a data scientist is therefore extremely broad and ranges from knowledge of ML models to distributed programming on heterogeneous resources. While most of the available training resources focus on ML algorithms and tools such as TensorFlow, we designed a course for doctoral students in which model training is tightly coupled with the underlying technologies used to dynamically provision resources. Throughout the course, students have access to a dedicated cluster of computing nodes on local premises. A set of libraries and helper functions is provided to execute a parallelized ML task by automatically deploying a Spark driver and several Spark execution nodes as Docker containers; task scheduling is managed by an orchestration layer (Kubernetes). This solution automates the delivery of the software stack required by a typical ML workflow and enables scalability by allowing the execution of ML tasks, including training, over commodity (i.e. CPU) or high-performance (i.e. GPU) resources distributed over different hosts across a network. The adaptation of the same model to OCCAM, the HPC facility at the University of Turin, is currently under development.
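
For illustration only, the snippet below shows how a simple parallelized ML task could be submitted to a Spark cluster of the kind the course provisions (Dockerized driver and executors scheduled by Kubernetes). It is a minimal sketch, not the course's actual helper library: the master URL, service name, placeholder evaluation function, and hyperparameter grid are all assumptions.

# Hedged sketch: distribute a small hyperparameter scan over Spark executors.
from pyspark.sql import SparkSession

spark = (SparkSession.builder
         .master("spark://spark-master:7077")   # assumed cluster service name
         .appName("parallel-hyperparameter-scan")
         .getOrCreate())
sc = spark.sparkContext

def evaluate(learning_rate):
    # Placeholder for a real training/validation run executed on one worker.
    return (learning_rate, (learning_rate - 0.01) ** 2)

grid = [0.001, 0.005, 0.01, 0.05, 0.1]
results = sc.parallelize(grid).map(evaluate).collect()  # one task per value
best = min(results, key=lambda r: r[1])
print("best learning rate:", best[0])
spark.stop()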


2020 ◽  
Vol 12 (1) ◽  
Author(s):  
Riyad Alshammari ◽  
Noorah Atiyah ◽  
Tahani Daghistani ◽  
Abdulwahhab Alshammari

Diabetes is a salient issue and a significant health care concern for many nations. The forecast for the prevalence of diabetes is on the rise. Hence, building a machine learning prediction model to assist in the identification of diabetic patients is of great interest. This study aims to create a machine learning model capable of predicting diabetes with high performance. The study used the BigML platform to train four machine learning algorithms, namely Deepnet, Models (decision tree), Ensemble, and Logistic Regression, on data sets collected from the Ministry of National Guard Hospital Affairs (MNGHA) in Saudi Arabia between 2013 and 2015. The comparative evaluation criteria for the four algorithms were Accuracy, Precision, Recall, F-measure, and Phi coefficient. Results show that the Deepnet algorithm achieved higher performance than the other machine learning algorithms across these evaluation metrics.
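
The five evaluation criteria named above are standard and can be reproduced, for example, with scikit-learn, as in the hedged sketch below. The labels are toy values rather than MNGHA data; for binary outcomes the Phi coefficient coincides with the Matthews correlation coefficient, which is why matthews_corrcoef is used here.

# Illustration of the five evaluation criteria on hypothetical binary labels.
from sklearn.metrics import (accuracy_score, precision_score, recall_score,
                             f1_score, matthews_corrcoef)

y_true = [1, 0, 1, 1, 0, 0, 1, 0]   # toy ground truth (diabetic = 1)
y_pred = [1, 0, 1, 0, 0, 0, 1, 1]   # toy model output

print("Accuracy :", accuracy_score(y_true, y_pred))
print("Precision:", precision_score(y_true, y_pred))
print("Recall   :", recall_score(y_true, y_pred))
print("F-measure:", f1_score(y_true, y_pred))
print("Phi      :", matthews_corrcoef(y_true, y_pred))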


2019 ◽  
Vol 20 (8) ◽  
pp. 925-931 ◽  
Author(s):  
Gerhard-Paul Diller ◽  
Sonya Babu-Narayan ◽  
Wei Li ◽  
Jelena Radojevic ◽  
Aleksander Kempny ◽  
...  

Aims: To investigate the utility of novel deep learning (DL) algorithms in recognizing transposition of the great arteries (TGA) after atrial switch procedure or congenitally corrected TGA (ccTGA) based on routine transthoracic echocardiograms. In addition, the ability of DL algorithms to delineate and segment the systemic ventricle was evaluated. Methods and results: In total, 132 patients (92 with TGA after atrial switch and 40 with ccTGA; 60% male, age 38.3 ± 12.1 years) and 67 normal controls (57% male, age 48.5 ± 17.9 years) with routine transthoracic examinations were included. Convolutional neural networks were trained to classify patients by underlying diagnosis, and a U-Net design was used to automatically segment the systemic ventricle. The convolutional networks were built on over 100 000 frames of apical four-chamber or parasternal short-axis views to detect the underlying diagnoses. The DL algorithm had an overall accuracy of 98.0% in detecting the correct diagnosis. The U-Net architecture model correctly identified the systemic ventricle in all individuals and achieved high performance in segmenting the systemic right or left ventricle (Dice metric between 0.79 and 0.88, depending on diagnosis) when compared with human experts. Conclusion: Our study demonstrates the potential of machine learning algorithms, trained on routine echocardiographic datasets, to detect the underlying diagnosis in complex congenital heart disease. Automated delineation of the ventricular area was also feasible. These methods may in the future allow for the longitudinal, objective, and automated assessment of ventricular function.
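
The Dice metric reported for the segmentation results has a simple definition, 2|A ∩ B| / (|A| + |B|) for binary masks A and B. The sketch below is a generic NumPy implementation with hypothetical masks, not the authors' evaluation code.

# Generic Dice coefficient for comparing two binary segmentation masks.
import numpy as np

def dice_coefficient(pred_mask, true_mask, eps=1e-7):
    """Dice = 2*|A ∩ B| / (|A| + |B|) for binary masks."""
    pred = np.asarray(pred_mask, dtype=bool)
    true = np.asarray(true_mask, dtype=bool)
    intersection = np.logical_and(pred, true).sum()
    return (2.0 * intersection + eps) / (pred.sum() + true.sum() + eps)

# Hypothetical 4x4 masks (predicted vs. expert annotation):
pred = np.array([[0, 1, 1, 0], [0, 1, 1, 0], [0, 0, 1, 0], [0, 0, 0, 0]])
true = np.array([[0, 1, 1, 0], [0, 1, 1, 0], [0, 1, 1, 0], [0, 0, 0, 0]])
print("Dice:", dice_coefficient(pred, true))  # ≈ 0.91 for these toy masks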


2021 ◽  
Vol 13(62) (2) ◽  
pp. 705-714
Author(s):  
Arpad Kerestely

Efficient High Performance Computing for Machine Learning has become a necessity in the past few years. Data is growing exponentially in domains such as healthcare, government, and economics, driven by the development of IoT, smartphones, and gadgets. This large volume of data requires storage that no traditional computing system can offer, and it must be fed to Machine Learning algorithms so that useful information can be extracted from it. The larger the dataset fed to a Machine Learning algorithm, the more precise the results will be, but the time to compute those results will also increase; hence the need for efficient High Performance Computing in aid of faster and better Machine Learning algorithms. This paper aims to unveil how one benefits from the other, what research has achieved so far, and where it is heading.

