Deep Learning Neural Network: Recently Published Documents

Total documents: 302 (five years: 248)
H-index: 17 (five years: 10)

Cancers, 2022, Vol. 14 (2), p. 352
Author(s): Anyou Wang, Rong Hai, Paul J. Rider, Qianchuan He

Detecting cancers at early stages can dramatically reduce mortality rates, so practical cancer screening at the population level is needed. To develop a comprehensive detection system that classifies multiple cancer types, we integrated an artificial intelligence deep learning neural network with noncoding RNA biomarkers selected from massive data. Our system accurately separates cancer from healthy samples with an AUC (area under the receiver operating characteristic curve) of 96.3%, and it reaches an AUC of 78.77% when validated on raw real-world data from a completely independent data set. Even when validated on raw exosome data from blood, the system reaches an AUC of 72%. Moreover, it significantly outperforms conventional machine learning models such as random forest. Intriguingly, with no more than six biomarkers, our approach can easily discriminate any individual cancer type from normal with 99% to 100% AUC. Furthermore, a comprehensive marker panel can simultaneously multi-classify common cancers with a stable 82.15% accuracy across heterogeneous cancerous tissues and conditions. This detection system provides a promising practical framework for automatic cancer screening at the population level. Key points: (1) We developed a practical cancer screening system that is simple, accurate, affordable, and easy to operate. (2) The system binarily classifies cancer vs. normal with >96% AUC. (3) In total, 26 individual cancer types can be easily detected by our system with 99% to 100% AUC. (4) The system can detect multiple cancer types simultaneously with >82% accuracy.
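The headline metric throughout this abstract is AUC of ROC. As a minimal, self-contained illustration (not the authors' pipeline; the labels and scores below are toy values), AUC can be computed directly from classifier scores via the Mann-Whitney rank statistic:

```python
import numpy as np

def roc_auc(labels, scores):
    """AUC via the Mann-Whitney statistic: the probability that a
    randomly chosen positive sample scores higher than a randomly
    chosen negative one (ties count half)."""
    labels = np.asarray(labels)
    scores = np.asarray(scores, dtype=float)
    pos = scores[labels == 1]
    neg = scores[labels == 0]
    wins = (pos[:, None] > neg[None, :]).sum()   # positive outranks negative
    ties = (pos[:, None] == neg[None, :]).sum()  # equal scores
    return (wins + 0.5 * ties) / (len(pos) * len(neg))
```

A perfect separator yields 1.0, a random scorer about 0.5; the paper's reported 96.3% corresponds to near-perfect ranking of cancer above healthy samples.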


2022, pp. 25-49
Author(s): Hany Hassanin, Prveen Bidare, Yahya Zweiri, Khamis Essa

Artificial intelligence and additive manufacturing are primary drivers of Industry 4.0, which is reshaping the manufacturing industry. Based on the progressive layer-by-layer principle, additive manufacturing allows mechanical parts of high geometric complexity to be built. In this chapter, a deep learning neural network (DLNN) is introduced to rationalize the effect of cellular structure design factors and process variables on the physical and mechanical properties of parts produced by laser powder bed fusion. The developed models were validated and used to create process maps. The trained DLNN model showed the highest accuracy for both design and process optimization. The findings indicate that deep learning neural networks are an effective technique for predicting material properties from limited data sets.
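The chapter's core idea is regressing material properties from process and design variables with a small network. A from-scratch sketch of that pattern is below; the three input variables and the synthetic target are purely illustrative stand-ins (the chapter's actual features, data, and architecture are not given here):

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical standardized process/design variables (e.g. laser power,
# scan speed, strut diameter) and a synthetic "property" target.
X = rng.normal(size=(200, 3))
y = X @ np.array([0.5, -0.3, 0.2]) + 0.1 * np.tanh(X[:, 0] * X[:, 1])

# One hidden layer with tanh units, trained by plain gradient descent.
W1 = rng.normal(scale=0.5, size=(3, 16)); b1 = np.zeros(16)
W2 = rng.normal(scale=0.5, size=(16,));   b2 = 0.0
lr = 0.05
for _ in range(2000):
    h = np.tanh(X @ W1 + b1)               # hidden activations
    err = (h @ W2 + b2) - y                # residual
    gW2 = h.T @ err / len(y); gb2 = err.mean()
    gh = np.outer(err, W2) * (1 - h ** 2)  # backprop through tanh
    gW1 = X.T @ gh / len(y); gb1 = gh.mean(axis=0)
    W1 -= lr * gW1; b1 -= lr * gb1; W2 -= lr * gW2; b2 -= lr * gb2

mse = float(np.mean((np.tanh(X @ W1 + b1) @ W2 + b2 - y) ** 2))
```

With a fitted surrogate like this, sweeping two inputs over a grid and plotting the prediction is exactly how a "process map" is produced.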


Author(s): Stefano Feraco, Angelo Bonfitto, Nicola Amati, Andrea Tonoli

This paper presents a redundant multi-object detection method for autonomous driving that combines Light Detection and Ranging (LiDAR) and stereocamera sensors to detect different obstacles. These sensors feed distinct perception pipelines within a custom hardware/software architecture deployed on a self-driving electric racing vehicle. The resulting local map, referenced to the vehicle position, enables the development of further local trajectory planning algorithms. The LiDAR-based algorithm exploits point cloud segmentation for ground filtering and obstacle detection, while the stereocamera-based perception pipeline relies on a Single Shot Detector using a deep learning neural network. The presented algorithm is experimentally validated on the instrumented vehicle during different driving maneuvers.
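The LiDAR pipeline's first stage, ground filtering, can be sketched in a few lines. This is a deliberately naive height-threshold version (the paper's actual segmentation method is not specified here); points within a tolerance of the lowest point are treated as ground and discarded:

```python
import numpy as np

def filter_ground(points, z_thresh=0.2):
    """Naive ground removal for an N x 3 point cloud (x, y, z):
    points within z_thresh of the lowest point are treated as ground,
    the rest are kept as obstacle candidates."""
    ground_level = points[:, 2].min()
    return points[points[:, 2] > ground_level + z_thresh]

# Toy cloud: two near-ground returns and a box-shaped obstacle.
pts = np.array([[0.0, 0.0, 0.00],
                [1.0, 0.0, 0.05],
                [2.0, 1.0, 1.00],
                [2.0, 1.0, 1.20]])
obstacles = filter_ground(pts)
```

Real implementations typically replace the flat-ground assumption with a RANSAC plane fit, but the obstacle/ground split feeding the local map is the same idea.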


2022, Vol. 355, p. 02021
Author(s): Zeshu Li, Mingchao Xia, Qifang Chen

This paper presents a life prediction method based on parameters from the actual operation history data collected by existing converter power unit sensors. First, the characteristics of the junction temperature curves of the forced air-cooled radiator and the power unit are extracted, and a deep learning neural network architecture is constructed from these characteristics. A thermoelectric coupling model of the power unit, based on thermal resistance calculation theory, is then established, and the cumulative loss is obtained from the measured data. The deep learning network is trained and the model prediction is verified. Finally, the power unit loss distribution under different temperature thresholds and its correlation with radiator parameters are obtained, providing a feasible scheme for parameter setting and life prediction.
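The thermal-resistance theory underlying the paper's thermoelectric coupling model reduces, in its simplest steady-state form, to a resistance chain from junction to ambient. The sketch below uses hypothetical resistance values for illustration; the paper's actual model and parameters are not reproduced here:

```python
def junction_temp(p_loss, t_ambient, r_jc, r_ch, r_ha):
    """Steady-state junction temperature (deg C) from the thermal
    resistance chain junction -> case -> heatsink -> ambient (K/W),
    given device power loss p_loss (W)."""
    return t_ambient + p_loss * (r_jc + r_ch + r_ha)

def cumulative_loss(p_losses, dt):
    """Accumulated dissipated energy (J) from power-loss samples
    taken every dt seconds."""
    return sum(p_losses) * dt

# Illustrative numbers: 100 W loss, 40 deg C ambient, typical-order
# resistances for a forced air-cooled assembly.
tj = junction_temp(100.0, 40.0, r_jc=0.05, r_ch=0.02, r_ha=0.10)
```

Sweeping `p_loss` against a junction temperature threshold is one way to obtain the loss distribution under different temperature settings that the paper analyzes.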


2021, pp. 014459872110681
Author(s): Tamer Khatib, Ameera Gharaba, Zain Haj Hamad, Aladdin Masri

This paper presents deep learning neural network models for photovoltaic output current prediction: long short-term memory (LSTM) and gated recurrent unit (GRU) neural networks. The proposed models predict photovoltaic output current at one-second resolution for a week ahead, using global solar radiation and ambient temperature values as inputs; they can predict the output current of the photovoltaic system for the upcoming seven days after being trained on only half a day of data. The models are developed in Python, and experimental data from a 1.4 kWp PV system are used to train, validate, and test them. Highly uncertain data sampled at one-second steps are used in this research. Results show that the proposed models accurately predict photovoltaic output current, with average root mean square errors of 0.28 A for LSTM and 0.27 A for GRU (the maximum current of the system is 7.91 A). In addition, GRU is slightly more accurate than LSTM for this purpose and utilises less processor capacity. Finally, a comparison with other similar methods is conducted to show the significance of the proposed models.
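The GRU's lower compute cost comes from having two gates instead of the LSTM's three. A from-scratch forward pass of a single GRU cell (random untrained weights, two inputs standing in for radiation and temperature; not the authors' trained model) shows the full recurrence:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gru_step(x, h, params):
    """One GRU update: the update gate z blends the old hidden state
    with a candidate state; the reset gate r controls how much history
    enters the candidate."""
    Wz, Uz, bz, Wr, Ur, br, Wh, Uh, bh = params
    z = sigmoid(x @ Wz + h @ Uz + bz)              # update gate
    r = sigmoid(x @ Wr + h @ Ur + br)              # reset gate
    h_tilde = np.tanh(x @ Wh + (r * h) @ Uh + bh)  # candidate state
    return (1.0 - z) * h + z * h_tilde

rng = np.random.default_rng(0)
n_in, n_hid = 2, 8  # inputs: e.g. solar radiation, ambient temperature
shapes = [(n_in, n_hid), (n_hid, n_hid), (n_hid,)] * 3
params = [rng.normal(scale=0.1, size=s) for s in shapes]

h = np.zeros(n_hid)
for x in rng.normal(size=(10, n_in)):  # a short toy input sequence
    h = gru_step(x, h, params)
```

In practice the final hidden state feeds a linear output layer producing the predicted current; framework implementations (e.g. in Keras or PyTorch) compute exactly this recurrence, just vectorized and trained.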


Electronics, 2021, Vol. 11 (1), p. 15
Author(s): David Černý, Josef Dobeš

In this paper, a special method based on a neural network is presented that precomputes the steps of numerical integration. The method approximates the behaviour of the numerical integrator with respect to the local truncation error. In other words, it allows the individual steps to be precomputed so that they need not be estimated by an algorithm but can be estimated directly by a neural network. Experimental tests were performed on a series of electrical circuits with different component parameters. The method was tested for two integration methods implemented in the simulation program SPICE (trapezoidal and Gear). For each type of circuit, a custom network was trained. Experimental simulations showed that, for well-defined problems with a sufficiently trained network, the method in most cases reduces the total number of iteration steps performed by the algorithm during the simulation. Applications of this method, its drawbacks, and possible further optimizations are also discussed.
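For context, the step-estimation algorithm the network replaces looks like the following sketch: a trapezoidal step whose local truncation error is estimated and used to reject/shrink or grow the step. SPICE uses the implicit trapezoidal rule; this simplified stand-in uses the explicit Heun variant (Euler predictor, trapezoidal corrector) so it stays self-contained, and the error estimate is the predictor-corrector gap:

```python
import math

def heun_adaptive(f, y0, t0, t_end, h, tol):
    """Integrate y' = f(t, y) with the explicit trapezoidal (Heun)
    method; the gap between the Euler predictor and the trapezoidal
    corrector serves as a crude local-truncation-error estimate that
    drives the step-size control."""
    t, y, steps = t0, y0, 0
    while t < t_end:
        h = min(h, t_end - t)
        pred = y + h * f(t, y)                           # Euler predictor
        corr = y + 0.5 * h * (f(t, y) + f(t + h, pred))  # trapezoid corrector
        est = abs(corr - pred)                           # LTE proxy
        if est > tol and h > 1e-6:
            h *= 0.5                                     # reject, shrink step
            continue
        t, y, steps = t + h, corr, steps + 1
        if est < 0.1 * tol:
            h *= 1.5                                     # cheap step, grow
    return y, steps

# Test problem y' = -y, exact solution exp(-t).
y, steps = heun_adaptive(lambda t, y: -y, 1.0, 0.0, 5.0, 0.5, 1e-4)
```

The paper's idea is to let a trained network output `h` directly from circuit state, skipping the reject-and-retry loop that this estimator otherwise performs.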


2021, Vol. 2021, pp. 1-9
Author(s): Yining Du

With the development of neural networks in deep learning, machine learning has become a main focus of researchers. In college English grammar detection, spoken grammar has the highest error rate. This paper therefore optimizes a multilayer perceptron (MLP) with a genetic algorithm (GA) in a deep learning neural network and studies intelligent image-based correction of college English spoken grammar. The GA-MLP-NN algorithm is first discussed and analyzed, and the optimized algorithm is then used to build an error correction model for spoken grammar. The results show that GA-MLP-NN provides excellent accuracy for the whole grammar error correction model. The paper then applies deep learning to build an intelligent image-based error correction model for college English spoken grammar; the results show that intelligent correction of spoken grammar is both fast and accurate.
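The GA side of a GA-MLP hybrid is an evolutionary search over network parameters or hyperparameters. A minimal real-coded GA is sketched below; the fitness function is a toy stand-in for "MLP validation accuracy as a function of two hypothetical hyperparameters" (the paper's actual encoding and fitness are not given here):

```python
import random

def genetic_search(fitness, bounds, pop_size=30, generations=40, seed=1):
    """Minimal real-coded GA: tournament selection, blend crossover,
    Gaussian mutation, with the best individual ever seen retained."""
    rng = random.Random(seed)
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    best = max(pop, key=fitness)
    for _ in range(generations):
        new_pop = []
        for _ in range(pop_size):
            # Two parents, each the winner of a 3-way tournament.
            a, b = (max(rng.sample(pop, 3), key=fitness) for _ in range(2))
            child = [x + rng.random() * (y - x) for x, y in zip(a, b)]
            child = [min(max(g + rng.gauss(0, 0.1), lo), hi)  # mutate + clamp
                     for g, (lo, hi) in zip(child, bounds)]
            new_pop.append(child)
        pop = new_pop
        best = max(pop + [best], key=fitness)
    return best

# Toy fitness surface peaking at (0.3, -0.5).
fit = lambda p: -(p[0] - 0.3) ** 2 - (p[1] + 0.5) ** 2
best = genetic_search(fit, [(-1, 1), (-1, 1)])
```

In the GA-MLP setting, `fitness` would train or evaluate an MLP for each candidate and return its validation score; the GA supplies the gradient-free outer loop.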


Webology, 2021, Vol. 18 (Special Issue 04), pp. 1470-1478
Author(s): R. Lavanya, Ebani Gogia, Nihal Rai

Recommendation systems (RS) are a crucial part of offering items, especially in streaming services. For movie streaming on OTT platforms, an RS helps users find new movies to watch. In this paper, we propose a machine learning approach based on autoencoders to produce a collaborative filtering (CF) system that outputs a movie rating for a user based on a huge database of ratings from other users. Using the MovieLens dataset, we explore deep learning neural network based stacked autoencoders to predict users' ratings on new movies, thereby enabling movie recommendations. We additionally implement Singular Value Decomposition (SVD) to recommend movies to users. The experimental results show that our RS outperforms a user-based neighbourhood baseline both in terms of MSE on predicted ratings and in a survey in which users judge between recommendations from both systems.
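The SVD component of such a system can be illustrated on a toy rating matrix: fill the unrated cells, take a low-rank SVD reconstruction, and read predicted ratings off the reconstruction. The matrix and the rank are illustrative only (the paper works on MovieLens, not this toy data):

```python
import numpy as np

# Toy user x movie rating matrix; 0 marks "unrated".
R = np.array([[5, 4, 0, 1],
              [4, 5, 1, 0],
              [1, 0, 5, 4],
              [0, 1, 4, 5]], dtype=float)

mask = R > 0
filled = np.where(mask, R, R.sum() / mask.sum())  # fill gaps with global mean

# Rank-2 truncated SVD: the reconstruction smooths over the filled
# cells, yielding predicted ratings for the unrated entries.
U, s, Vt = np.linalg.svd(filled, full_matrices=False)
pred = U[:, :2] @ np.diag(s[:2]) @ Vt[:2, :]
```

Users 0-1 and 2-3 form two taste clusters, so the reconstruction predicts a low rating for user 0 on movie 2 even though that cell was unobserved. A stacked autoencoder plays the same role nonlinearly: it compresses each rating vector to a small code and decodes it back, and the decoded values at unrated positions are the predictions.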

