Optimization and Implementation of a Collaborative Learning Algorithm for an AI-Enabled Real-time Biomedical System

2021 ◽  
Vol 102 ◽  
pp. 04017
Author(s):  
Sinchhean Phea ◽  
Zhishang Wang ◽  
Jiangkun Wang ◽  
Abderazek Ben Abdallah

Recent years have witnessed rapid growth of Artificial Intelligence (AI) in biomedical fields. However, an accurate and secure system for pneumonia detection and diagnosis is urgently needed. We present the optimization and implementation of a collaborative learning algorithm for an AI-Enabled Real-time Biomedical System (AIRBiS), where a convolutional neural network is deployed for pneumonia (i.e., COVID-19) image classification. With augmentation optimization, the federated learning (FL) approach achieves a high accuracy of 95.66%, which outperforms the conventional learning approach with an accuracy of 94.08%. Using multiple edge devices also reduces overall training time.
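The federated learning setup described above aggregates model updates from multiple edge devices. The abstract does not give the aggregation details, but a minimal FedAvg-style sketch (all names and values hypothetical) looks like:

```python
import numpy as np

def fedavg(client_weights, client_sizes):
    """Weighted average of per-client model weights (FedAvg-style).

    client_weights: list of per-client weight lists (one array per layer)
    client_sizes:   number of local training samples per client
    """
    total = sum(client_sizes)
    return [
        sum(w[i] * (n / total) for w, n in zip(client_weights, client_sizes))
        for i in range(len(client_weights[0]))
    ]

# Two edge devices with toy one-layer "models" (hypothetical values).
w_a = [np.array([1.0, 2.0])]
w_b = [np.array([3.0, 4.0])]
global_w = fedavg([w_a, w_b], client_sizes=[100, 300])
print(global_w[0])  # weighted toward the larger client: [2.5 3.5]
```

The weighting by local dataset size is what lets uneven clients contribute proportionally; the raw images themselves never leave the edge devices.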

2021 ◽  
Vol 11 (11) ◽  
pp. 4758
Author(s):  
Ana Malta ◽  
Mateus Mendes ◽  
Torres Farinha

Maintenance professionals and other technical staff regularly need to learn to identify new parts in car engines and other equipment. The present work proposes a model of a task assistant based on a deep learning neural network. A YOLOv5 network is used for recognizing some of the constituent parts of an automobile. A dataset of car engine images was created and eight car parts were marked in the images. Then, the neural network was trained to detect each part. The results show that YOLOv5s is able to successfully detect the parts in real time video streams, with high accuracy, thus being useful as an aid to train professionals learning to deal with new equipment using augmented reality. The architecture of an object recognition system using augmented reality glasses is also designed.
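Real-time detectors such as YOLOv5 rely on non-maximum suppression (NMS) to discard overlapping candidate boxes before reporting detections; a minimal sketch of that post-processing step (boxes, scores, and threshold are illustrative, not from the paper):

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def nms(boxes, scores, thresh=0.5):
    """Greedy non-maximum suppression: keep the highest-scoring box,
    drop any remaining box that overlaps it above `thresh`."""
    order = sorted(range(len(boxes)), key=lambda i: -scores[i])
    keep = []
    while order:
        i = order.pop(0)
        keep.append(i)
        order = [j for j in order if iou(boxes[i], boxes[j]) < thresh]
    return keep

boxes = [(0, 0, 10, 10), (1, 1, 11, 11), (50, 50, 60, 60)]
scores = [0.9, 0.8, 0.7]
print(nms(boxes, scores))  # overlapping second box suppressed: [0, 2]
```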


2019 ◽  
Vol 34 (11) ◽  
pp. 4924-4931 ◽  
Author(s):  
Daichi Kitaguchi ◽  
Nobuyoshi Takeshita ◽  
Hiroki Matsuzaki ◽  
Hiroaki Takano ◽  
Yohei Owada ◽  
...  

Author(s):  
Nima Kargah-Ostadi ◽  
Ammar Waqar ◽  
Adil Hanif

Roadway asset inventory data are essential in making data-driven asset management decisions. Despite significant advances in automated data processing, the current state of the practice is semi-automated. This paper demonstrates integration of state-of-the-art artificial intelligence technologies within a practical framework for automated real-time identification of traffic signs from roadway images. The framework deploys one of the very latest machine learning algorithms on a cutting-edge plug-and-play device for superior effectiveness, efficiency, and reliability. The proposed platform provides an offline system onboard the survey vehicle that runs a lightweight and speedy deep neural network on each collected roadway image and identifies traffic signs in real time. Integration of these advanced technologies minimizes the need for subjective and time-consuming human interventions, thereby enhancing the repeatability and cost-effectiveness of the asset inventory process. The proposed framework is demonstrated using a real-world image dataset. Appropriate pre-processing techniques were employed to alleviate limitations in the training dataset. A deep learning algorithm was trained for detection, classification, and localization of traffic signs from roadway imagery. The success metrics based on this demonstration indicate that the algorithm was effective in identifying traffic signs with high accuracy on a test dataset that was not used for model development. Additionally, the algorithm exhibited this high accuracy consistently among the different considered sign categories. Moreover, the algorithm was repeatable among multiple runs and reproducible across different locations. Above all, the real-time processing capability of the proposed solution reduces the time between data collection and delivery, which enhances the data-driven decision-making process.
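One of the claims above is that accuracy was consistent across sign categories. A small helper for checking that kind of per-category breakdown on a labeled test set (labels and predictions hypothetical) could look like:

```python
from collections import defaultdict

def per_category_accuracy(y_true, y_pred):
    """Accuracy computed separately for each ground-truth category."""
    correct, total = defaultdict(int), defaultdict(int)
    for t, p in zip(y_true, y_pred):
        total[t] += 1
        correct[t] += int(t == p)
    return {c: correct[c] / total[c] for c in total}

# Hypothetical ground truth vs. model predictions for three sign classes.
y_true = ["stop", "stop", "yield", "speed", "speed"]
y_pred = ["stop", "yield", "yield", "speed", "speed"]
print(per_category_accuracy(y_true, y_pred))
# {'stop': 0.5, 'yield': 1.0, 'speed': 1.0}
```

Reporting accuracy per category rather than only in aggregate is what exposes whether a rare sign class is being systematically missed.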


Sensors ◽  
2019 ◽  
Vol 19 (18) ◽  
pp. 3827 ◽  
Author(s):  
Minwoo Kim ◽  
Jaechan Cho ◽  
Seongjoo Lee ◽  
Yunho Jung

We propose an efficient hand gesture recognition (HGR) algorithm, which can cope with time-dependent data from an inertial measurement unit (IMU) sensor and support real-time learning for various human-machine interface (HMI) applications. Although the data extracted from IMU sensors are time-dependent, most existing HGR algorithms do not consider this characteristic, which results in the degradation of recognition performance. Because the dynamic time warping (DTW) technique considers the time-dependent characteristic of IMU sensor data, the recognition performance of DTW-based algorithms is better than that of others. However, the DTW technique requires a very complex learning algorithm, which makes it difficult to support real-time learning. To solve this issue, the proposed HGR algorithm is based on a restricted column energy (RCE) neural network, which has a very simple learning scheme in which neurons are activated when necessary. By replacing the metric calculation of the RCE neural network with DTW distance, the proposed algorithm exhibits superior recognition performance for time-dependent sensor data while supporting real-time learning. Our verification results on a field-programmable gate array (FPGA)-based test platform show that the proposed HGR algorithm can achieve a recognition accuracy of 98.6% and supports real-time learning and recognition at an operating frequency of 150 MHz.
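The core idea above, replacing the RCE network's metric with DTW distance so that prototypes ("neurons") are added only when needed, can be sketched as follows (the radius, sequences, and gesture labels are illustrative, not values from the paper):

```python
import math

def dtw(a, b):
    """Dynamic time warping distance between two 1-D sequences."""
    n, m = len(a), len(b)
    D = [[math.inf] * (m + 1) for _ in range(n + 1)]
    D[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i][j] = cost + min(D[i - 1][j], D[i][j - 1], D[i - 1][j - 1])
    return D[n][m]

class RCEClassifier:
    """RCE-style learning: a new prototype is stored only when no
    existing prototype of the same class lies within the radius."""
    def __init__(self, radius):
        self.radius = radius
        self.prototypes = []  # list of (sequence, label)

    def train(self, seq, label):
        for p, l in self.prototypes:
            if l == label and dtw(p, seq) <= self.radius:
                return  # already covered; no new neuron activated
        self.prototypes.append((seq, label))

    def predict(self, seq):
        return min(self.prototypes, key=lambda pl: dtw(pl[0], seq))[1]

clf = RCEClassifier(radius=0.5)
clf.train([1.0, 2.0, 3.0], "swipe")   # hypothetical IMU sequences
clf.train([5.0, 5.0, 5.0], "hold")
print(clf.predict([1.0, 2.0, 4.0]))   # "swipe"
```

Because training is a single distance check followed by an optional append, this scheme supports the incremental, real-time learning the abstract describes, unlike DTW template-training methods that re-optimize over the whole dataset.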


Author(s):  
Revathi. P ◽  
Pallikonda Rajasekaran. M ◽  
Babiyola. D ◽  
Aruna. R

Process variables vary with time in certain applications. Monitoring systems help avoid severe economic losses resulting from unexpected electric system failures by improving system reliability and maintainability. The installation and maintenance of such monitoring systems is easier when implemented using wireless techniques. The ZigBee protocol is a wireless technology developed as an open global standard to address low-cost, low-power wireless sensor networks. The goal is to monitor the process parameters and to classify them into normal and abnormal conditions, so that faults in the process are detected as early as possible using artificial intelligence techniques. A key issue is to prevent local faults from developing into system failures that may cause safety hazards, temporarily halt production, and possibly harm the environment. Computational intelligence techniques are being investigated as an extension to the traditional fault detection and diagnosis methods. This paper proposes the Adaptive Neuro-Fuzzy Inference System (ANFIS) for fault detection and diagnosis. In ANFIS, fuzzy logic creates the rules and membership functions, while the neural network trains the membership functions to obtain the best output. The output of ANFIS is compared with that of a Back Propagation Network (BPN). The training and testing data required to develop the ANFIS model were generated at different operating conditions by running the process and by creating various faults in real time in a laboratory experimental model.
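ANFIS couples fuzzy membership functions with neural training of their parameters. A minimal sketch of the fuzzy-inference half (Gaussian memberships feeding a Sugeno-style weighted average; the rule parameters and temperature values are hypothetical, not from the paper) is:

```python
import math

def gaussmf(x, mean, sigma):
    """Gaussian fuzzy membership function (ANFIS layer-1 style)."""
    return math.exp(-((x - mean) ** 2) / (2 * sigma ** 2))

def sugeno_output(x, rules):
    """Constant-consequent Sugeno inference: the output is the average
    of rule consequents weighted by each rule's firing strength."""
    weights = [gaussmf(x, m, s) for m, s, _ in rules]
    return sum(w * c for w, (_, _, c) in zip(weights, rules)) / sum(weights)

# Hypothetical rules as (mean, sigma, consequent) over one process variable.
rules = [(50.0, 10.0, 0.0),   # readings near 50 -> normal (0)
         (90.0, 10.0, 1.0)]   # readings near 90 -> fault  (1)
print(round(sugeno_output(88.0, rules), 3))  # close to 1: classified as fault
```

In full ANFIS the `mean` and `sigma` parameters would be tuned by backpropagation against the training data; here they are fixed by hand purely to show the inference path.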


2020 ◽  
Vol 12 (21) ◽  
pp. 3508
Author(s):  
Mohammed Elhenawy ◽  
Huthaifa I. Ashqar ◽  
Mahmoud Masoud ◽  
Mohammed H. Almannaa ◽  
Andry Rakotonirainy ◽  
...  

As the Autonomous Vehicle (AV) industry is rapidly advancing, the classification of non-motorized (vulnerable) road users (VRUs) becomes essential to ensure their safety and the smooth operation of road applications. The typical practice of non-motorized road user classification usually takes significant training time and ignores the temporal evolution and behavior of the signal. In this research effort, we attempt to detect VRUs with high accuracy by proposing a novel framework that uses Deep Transfer Learning, which saves training time and cost, to classify images constructed from Recurrence Quantification Analysis (RQA) that reflect the temporal dynamics and behavior of the signal. Recurrence Plots (RPs) were constructed from low-power smartphone sensors without using GPS data. The resulting RPs were used as inputs for different pre-trained Convolutional Neural Network (CNN) classifiers: 227 × 227 images for AlexNet and SqueezeNet, and 224 × 224 images for VGG16 and VGG19. Results show that the classification accuracy of Convolutional Neural Network Transfer Learning (CNN-TL) reaches 98.70%, 98.62%, 98.71%, and 98.71% for AlexNet, SqueezeNet, VGG16, and VGG19, respectively. Moreover, we trained resnet101 and shufflenet for a very short time using one epoch of data and then used them as weak learners, which yielded 98.49% classification accuracy. The results of the proposed framework outperform other results in the literature (to the best of our knowledge) and show that using CNN-TL is promising for VRU classification. Because of its relative straightforwardness, ability to be generalized and transferred, and potential high accuracy, we anticipate that this framework might be able to solve various problems related to signal classification.
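The recurrence plots at the heart of this framework are straightforward to construct from a sensor trace; a minimal sketch (threshold and signal values hypothetical; in the pipeline above the resulting matrix would then be rendered and resized to 227 × 227 or 224 × 224 for the named CNNs):

```python
import numpy as np

def recurrence_plot(x, eps):
    """Binary recurrence plot: R[i, j] = 1 where |x_i - x_j| < eps."""
    x = np.asarray(x, dtype=float)
    dist = np.abs(x[:, None] - x[None, :])  # pairwise distances
    return (dist < eps).astype(np.uint8)

signal = [0.0, 0.1, 1.0, 0.05]            # hypothetical sensor samples
rp = recurrence_plot(signal, eps=0.2)      # 4 x 4 symmetric 0/1 matrix
print(rp)
```

The texture of this matrix encodes when the signal revisits earlier states, which is exactly the temporal-dynamics information the abstract argues conventional feature pipelines discard.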


Water ◽  
2019 ◽  
Vol 12 (1) ◽  
pp. 96 ◽  
Author(s):  
Nobuaki Kimura ◽  
Ikuo Yoshinaga ◽  
Kenji Sekijima ◽  
Issaku Azechi ◽  
Daichi Baba

East Asian regions in the North Pacific have recently experienced severe riverine flood disasters. State-of-the-art neural networks are currently utilized as quick-response flood models. Neural networks typically require ample time in the training process because of the use of numerous datasets. To reduce the computational costs, we introduced a transfer-learning approach to a neural-network-based flood model. In transfer learning, once the model is pretrained in a source domain with large datasets, it can be reused in other target domains; after retraining parts of the model with the target-domain datasets, the training time is reduced thanks to this reuse. A convolutional neural network (CNN) was employed because the CNN with transfer learning has numerous successful applications in two-dimensional image classification. However, our flood model predicts time-series variables (e.g., water level), so the CNN with transfer learning requires a preprocessing tool that converts time-series datasets into image datasets. First, the CNN time-series classification was verified in the source domain with less than 10% error for the variation in water level. Second, the CNN with transfer learning in the target domain reduced the training time to one-fifth of, and the mean error by 15% relative to, that of the CNN without transfer learning. Our method can provide another novel flood model in addition to physics-based models.
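The time-series-to-image conversion is the piece that lets an image CNN consume water-level data. One simple way to sketch such a conversion (window size and values hypothetical; this is an illustration, not the paper's actual tool) is to stack sliding windows into a 2-D array:

```python
import numpy as np

def series_to_image(series, height):
    """Stack successive sliding windows of a 1-D time series into a
    2-D array that an image-based CNN can consume."""
    windows = [series[i:i + height] for i in range(len(series) - height + 1)]
    img = np.array(windows, dtype=float)
    # min-max normalize to [0, 1], like pixel intensities
    return (img - img.min()) / (img.max() - img.min())

levels = [1.0, 1.2, 1.5, 2.1, 2.0, 1.8]   # hypothetical water levels (m)
img = series_to_image(levels, height=3)
print(img.shape)  # (4, 3)
```

Each row is one window of recent history, so vertical structure in the image corresponds to the temporal evolution the flood model needs; the array would then be resized to the CNN's expected input resolution.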


2013 ◽  
Vol 433-435 ◽  
pp. 1388-1391 ◽  
Author(s):  
Wei Zhi Wang ◽  
Bing Han Liu

Traffic safety states can be divided into safe and dangerous according to the attributes of video images of traffic safety states. By analyzing various methods of intelligent video processing, we propose a synergetic neural network recognition model based on prototype patterns. The proposed method realizes real-time classification of traffic safety states with high recognition accuracy. The experimental results validate that the classification accuracy of the proposed method reaches 87.5%, an improvement of 16.2% over traditional neural network methods.
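A prototype-pattern recognition model of the kind described can be sketched as nearest-prototype matching by normalized inner product, a simplified stand-in for the synergetic network's order-parameter competition (prototypes and sample values are hypothetical):

```python
import numpy as np

def classify(sample, prototypes):
    """Assign the label of the prototype with the largest normalized
    inner product (a stand-in for a synergetic order parameter)."""
    v = np.asarray(sample, dtype=float)
    v = v / np.linalg.norm(v)
    best_label, best_score = None, -np.inf
    for label, proto in prototypes.items():
        p = np.asarray(proto, dtype=float)
        score = float(v @ (p / np.linalg.norm(p)))
        if score > best_score:
            best_label, best_score = label, score
    return best_label

# Hypothetical feature vectors extracted from video frames.
protos = {"safe": [1.0, 0.0, 0.2], "dangerous": [0.1, 1.0, 0.9]}
print(classify([0.9, 0.1, 0.3], protos))  # "safe"
```

In a full synergetic network the winning order parameter emerges from a competitive dynamical system rather than a single dot-product comparison, but the winner-take-all outcome against stored prototypes is the same.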

