Use of Deep Learning Networks and Statistical Modeling to Predict Changes in Mechanical Parameters of Contaminated Bone Cements

Materials ◽  
2020 ◽  
Vol 13 (23) ◽  
pp. 5419 ◽  
Author(s):  
Anna Machrowska ◽  
Jakub Szabelski ◽  
Robert Karpiński ◽  
Przemysław Krakowski ◽  
Józef Jonak ◽  
...  

The purpose of the study was to test the usefulness of deep learning artificial neural networks and statistical modeling in predicting the strength of bone cements with defects. The defects result from admixtures, such as blood or saline, introduced into the cement as contaminants at the preparation stage. Given the wide range of deep learning applications, including speech recognition, bioinformatics, and drug design, the study examined the extent to which information related to the prediction of the compressive strength of bone cements can be obtained. Developing and improving deep learning network (DLN) algorithms and statistical models for analysing changes in the mechanical parameters of the tested materials will make it possible to determine an acceptable margin of error, during surgery or cement preparation, relative to the expected strength of the material used to fill bone cavities. These computational methods may therefore play a significant role in the initial qualitative assessment of the effects of procedures, mitigating errors that would otherwise result in a failure to maintain the required mechanical parameters and in patient dissatisfaction.
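As an illustration of the kind of statistical modeling the study describes, the sketch below fits a least-squares line relating contaminant content to compressive strength and reads off the contamination level at which a required strength is no longer met. All numbers are hypothetical and are not measurements from the study.

```python
import numpy as np

# Hypothetical illustration: fit a linear model of compressive strength
# vs. contaminant (e.g. saline) fraction.  The values below are invented
# for demonstration and are not data from the study.
contaminant_pct = np.array([0.0, 2.0, 4.0, 6.0, 8.0, 10.0])       # % admixture
strength_mpa    = np.array([93.0, 90.5, 87.9, 85.2, 82.6, 80.1])  # MPa

# Ordinary least squares: strength ~ a * contaminant_pct + b
A = np.vstack([contaminant_pct, np.ones_like(contaminant_pct)]).T
(a, b), *_ = np.linalg.lstsq(A, strength_mpa, rcond=None)

def predicted_strength(pct):
    """Predicted compressive strength (MPa) at a given contamination level."""
    return a * pct + b

# Acceptable contamination margin for a required strength threshold:
required = 85.0                 # MPa, hypothetical clinical requirement
margin_pct = (required - b) / a  # contamination at which strength hits the threshold
```

A statistical model fitted this way gives exactly the kind of margin-of-error estimate the abstract refers to, before any deep network is brought in.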

2021 ◽  
Vol 11 (1) ◽  
pp. 339-348
Author(s):  
Piotr Bojarczak ◽  
Piotr Lesiak

Abstract: The article uses images from Unmanned Aerial Vehicles (UAVs) for rail diagnostics. The main advantage of this solution over traditional surveys performed with measuring vehicles is that train traffic does not need to be reduced. The study is limited to the diagnosis of hazardous split defects in rails. An algorithm is proposed that detects them with an efficiency of about 81% for defects no smaller than 6.9% of the rail head width. It uses the FCN-8 deep learning network, implemented in the TensorFlow environment, to extract the rail head by image segmentation. Using this type of network for segmentation increases the algorithm's resistance to changes in the brightness of the recorded rail image, which is of fundamental importance given the variable conditions under which UAVs record images. The defects in the rail head are detected using an algorithm written in Python with the OpenCV library; to locate a defect, it uses the contour of the extracted rail head together with a rectangle circumscribed around it. The use of UAVs together with artificial intelligence to detect split defects is an important element of novelty in this work.
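The localization step described above, a rail-head region with a circumscribed rectangle and the 6.9%-of-head-width detectability threshold, can be sketched with plain NumPy in place of OpenCV; the masks below are hypothetical stand-ins for the FCN-8 segmentation output.

```python
import numpy as np

def bounding_box(mask):
    """Return (x_min, x_max, y_min, y_max) of the nonzero region of a binary mask."""
    ys, xs = np.nonzero(mask)
    return xs.min(), xs.max(), ys.min(), ys.max()

# Hypothetical masks standing in for the FCN-8 segmentation output:
# a 100-pixel-wide rail head containing a 10-pixel-wide split defect.
head = np.zeros((40, 120), dtype=np.uint8)
head[5:35, 10:110] = 1          # rail head region (width 100 px)
defect = np.zeros_like(head)
defect[15:25, 50:60] = 1        # candidate split defect (width 10 px)

hx0, hx1, _, _ = bounding_box(head)
dx0, dx1, _, _ = bounding_box(defect)
head_width = hx1 - hx0 + 1
defect_width = dx1 - dx0 + 1

# The paper reports reliable detection only for defects no narrower than
# ~6.9% of the rail head width; flag this candidate accordingly.
detectable = defect_width / head_width >= 0.069
```

In the actual pipeline the rectangles would come from OpenCV contour functions on the segmented head, but the width-ratio test is the same.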


Sensors ◽  
2021 ◽  
Vol 21 (4) ◽  
pp. 1031
Author(s):  
Joseba Gorospe ◽  
Rubén Mulero ◽  
Olatz Arbelaitz ◽  
Javier Muguerza ◽  
Miguel Ángel Antón

Deep learning techniques are being used increasingly in the scientific community as a consequence of the high computational capacity of current systems and the growing amount of data available through the digitalisation of society in general and the industrial world in particular. In addition, the emergence of edge computing, which focuses on integrating artificial intelligence as close as possible to the client, makes it possible to implement systems that act in real time without the need to transfer all of the data to centralised servers. The combination of these two concepts can lead to systems with the capacity to make correct decisions and act on them immediately and in situ. Despite this, the low capacity of embedded systems greatly hinders this integration, so the ability to integrate deep learning into a wide range of microcontrollers can be a great advantage. This paper contributes an environment based on Mbed OS and TensorFlow Lite that can be embedded in any general-purpose embedded system, allowing the introduction of deep learning architectures. The experiments herein show that the proposed system is competitive with other commercial systems.
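One reason frameworks such as TensorFlow Lite can fit models onto microcontrollers is weight quantization. The sketch below is a minimal NumPy illustration of affine int8 quantization and is not the actual TensorFlow Lite converter; the exact rounding scheme is a simplifying assumption.

```python
import numpy as np

def quantize_int8(weights):
    """Affine int8 quantization of a float weight tensor, of the kind a
    converter applies before deployment on a microcontroller.
    Returns (q, scale, zero_point)."""
    w_min, w_max = float(weights.min()), float(weights.max())
    scale = (w_max - w_min) / 255.0 if w_max > w_min else 1.0
    zero_point = int(round(-128 - w_min / scale))
    q = np.clip(np.round(weights / scale) + zero_point, -128, 127).astype(np.int8)
    return q, scale, zero_point

def dequantize(q, scale, zero_point):
    """Recover approximate float weights from the int8 representation."""
    return (q.astype(np.float32) - zero_point) * scale

rng = np.random.default_rng(0)
w = rng.normal(0.0, 0.5, size=(8, 8)).astype(np.float32)
q, s, z = quantize_int8(w)
w_hat = dequantize(q, s, z)
max_err = float(np.abs(w - w_hat).max())   # bounded by roughly one scale step
```

Storing `q` instead of `w` cuts the memory footprint by 4x, which is what makes deep architectures viable on the low-capacity devices the paper targets.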


Author(s):  
Lu Gao ◽  
Yao Yu ◽  
Yi Hao Ren ◽  
Pan Lu

Pavement maintenance and rehabilitation (M&R) records are important because they document that M&R treatments have been performed and completed appropriately. Moreover, the development of pavement performance models relies heavily on the quality of the condition data collected and on the M&R records. However, the history of pavement M&R activities is often missing or unavailable to highway agencies for many reasons. Without accurate M&R records, it is difficult to determine whether a condition change between two consecutive inspections is the result of M&R intervention, deterioration, or measurement error. In this paper, we employed three deep learning networks: a convolutional neural network (CNN), a long short-term memory (LSTM) network, and a combined CNN-LSTM model, to automatically detect whether an M&R treatment was applied to a pavement section during a given time period. Unlike conventional analysis methods, deep learning techniques do not require manual feature extraction. The maximum accuracy obtained on test data is 87.5%, achieved by the CNN-LSTM.
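A minimal sketch of how such a detection task can be framed for a sequence model: slide a window over a pavement condition series and label each window by whether an abrupt improvement (an assumed M&R signature) falls inside it. The series and the jump threshold are invented for illustration; a CNN-LSTM like the paper's would consume windows of this form.

```python
import numpy as np

# Synthetic pavement condition index over consecutive inspections: the
# jump from 76 to 92 mimics an M&R treatment; the threshold is illustrative.
condition = np.array([85, 83, 81, 78, 76, 92, 90, 88, 86, 84], dtype=float)
JUMP = 5.0   # an improvement larger than this is treated as an M&R event

def windows_and_labels(series, width=4):
    """Build (window, label) pairs for sequence classification:
    label is 1 when a jump larger than JUMP occurs inside the window."""
    X, y = [], []
    for i in range(len(series) - width + 1):
        w = series[i:i + width]
        X.append(w)
        y.append(int(np.any(np.diff(w) > JUMP)))
    return np.array(X), np.array(y)

X, y = windows_and_labels(condition)
```

Because the labels are derived directly from the raw series, no hand-crafted distress features are needed, which is the point the abstract makes about deep learning approaches.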


Author(s):  
Xia Yu ◽  
Tao Yang ◽  
Jingyi Lu ◽  
Yun Shen ◽  
Wei Lu ◽  
...  

Abstract: Blood glucose (BG) prediction is an effective approach to avoiding hyper- and hypoglycemia and achieving intelligent glucose management for patients with type 1 or serious type 2 diabetes. Recent studies have tended to adopt deep learning networks to obtain improved prediction models and more accurate results, which often requires significant quantities of historical continuous glucose monitoring (CGM) data. For new patients with limited historical data, however, it is difficult to establish an acceptable deep learning network for glucose prediction. Consequently, the goal of this study was to design a novel prediction framework with instance-based and network-based deep transfer learning for cross-subject glucose prediction based on segmented CGM time series. To account for inter-subject variability, dynamic time warping (DTW) was applied to determine the source-domain dataset sharing the greatest degree of similarity with a new subject. A network-based deep transfer learning method was then designed with the cross-domain dataset to obtain a personalized model with improved generalization capability. In a case study on a clinical dataset, the proposed deep transfer learning framework, given additional segmented data from other subjects, achieved more accurate glucose predictions for new subjects with type 2 diabetes.
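The DTW-based source-subject selection can be sketched as follows. The CGM segments are hypothetical, and the implementation is the textbook dynamic-programming recurrence rather than whatever optimized variant the study used.

```python
import numpy as np

def dtw_distance(a, b):
    """Classic dynamic-time-warping distance between two 1-D series."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            # extend the cheapest of the three admissible alignments
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

# Hypothetical CGM segments (mg/dL): pick the source subject whose
# glucose dynamics are closest to the new target subject.
target  = np.array([110, 120, 150, 180, 160, 130], dtype=float)
sources = {
    "subject_A": np.array([112, 125, 155, 178, 158, 128], dtype=float),
    "subject_B": np.array([200, 210, 220, 215, 205, 195], dtype=float),
}
best = min(sources, key=lambda k: dtw_distance(target, sources[k]))
```

The selected subject's data would then seed the network-based transfer step, giving the new patient a personalized model despite the limited history.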


Author(s):  
Layth Kamil Adday Almajmaie ◽  
Ahmed Raad Raheem ◽  
Wisam Ali Mahmood ◽  
Saad Albawi

Segmenting brain tissues from magnetic resonance images (MRI) poses substantive challenges to the clinical research community, especially when precise estimates of such tissues are required. In recent years, advances in deep learning techniques, more specifically fully convolutional networks (FCN), have yielded groundbreaking results in segmenting brain tumour tissue with high accuracy and precision, much to the relief of clinical physicians and researchers alike. A new hybrid deep learning architecture combining SegNet and U-Net to segment brain tissue is proposed here. A skip connection of the U-Net network is suitably exploited: the multi-scale information generated by the SegNet is used to obtain precise tissue boundaries from the brain images. Further, to ensure that the segmentation method performs well in conjunction with precisely delineated contours, its output is incorporated as a level-set layer in the deep learning network. The method was evaluated on the brain tumor segmentation (BraTS) 2017 and BraTS 2018 datasets, which are dedicated MRI brain tumour datasets. The results clearly indicate better brain tumour segmentation performance than existing methods.


2019 ◽  
Vol 59 (1) ◽  
pp. 426
Author(s):  
James Lowell ◽  
Jacob Smith

The interpretation of key horizons on seismic data is an essential but time-consuming part of the subsurface workflow. This is compounded when surfaces need to be re-interpreted on variations of the same data, such as angle stacks, 4D data, or reprocessed data. Deep learning networks, a subset of machine learning, have the potential to automate this reinterpretation process and significantly increase the efficiency of the subsurface workflow. This study investigates whether a deep learning network can learn from a single horizon interpretation in order to identify that event in a different version of the same data. The results were largely successful: the target horizon was correctly identified in an alternative offset stack and was correctly repositioned in areas where the training data and the test data were misaligned.


Author(s):  
Vijayarajan Rajangam ◽  
Sangeetha N. ◽  
Karthik R. ◽  
Kethepalli Mallikarjuna

Multimodal imaging systems assist medical practitioners with cost-effective diagnostic methods in clinical pathologies. Multimodal imaging of the same organ or region of interest reveals complementary anatomical and functional details. Multimodal image fusion algorithms integrate these complementary details into a composite image, reducing the clinician's time for effective diagnosis. Deep learning networks play a role in feature extraction for the fusion of multimodal images. This chapter analyzes the performance of a pre-trained VGG19 deep learning network that extracts features from the base and detail layers of the source images to construct a weight map for fusing the source image details. Maximum and averaging fusion rules are adopted for base-layer fusion. The performance of the fusion algorithm for multimodal medical images is analyzed using peak signal-to-noise ratio, structural similarity index, fusion factor, and figure of merit. Performance analysis is also carried out for source images corrupted by impulse and Gaussian noise.
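A minimal sketch of two-scale fusion with the averaging and maximum rules: each source image is split into a base layer (here a simple mean filter, standing in for the chapter's decomposition) and a detail layer; the bases are averaged and the stronger detail is kept. The VGG19 weight map is omitted, so this illustrates only the layer split and the fusion rules.

```python
import numpy as np

def box_blur(img, k=3):
    """Mean filter used to split an image into base and detail layers."""
    pad = k // 2
    padded = np.pad(img, pad, mode="edge")
    out = np.zeros_like(img, dtype=float)
    for dy in range(k):
        for dx in range(k):
            out += padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (k * k)

def fuse(img_a, img_b):
    """Two-scale fusion sketch: average the base layers (averaging rule)
    and keep the stronger of the two detail layers (maximum rule)."""
    base_a, base_b = box_blur(img_a), box_blur(img_b)
    detail_a, detail_b = img_a - base_a, img_b - base_b
    fused_base = (base_a + base_b) / 2.0
    fused_detail = np.where(np.abs(detail_a) >= np.abs(detail_b),
                            detail_a, detail_b)
    return fused_base + fused_detail

rng = np.random.default_rng(1)
mri = rng.random((16, 16))   # stand-ins for registered multimodal images
pet = rng.random((16, 16))
fused = fuse(mri, pet)
```

Metrics such as peak signal-to-noise ratio and structural similarity would then be computed between `fused` and the sources to score the algorithm.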


2020 ◽  
Vol 10 (18) ◽  
pp. 6502
Author(s):  
Shinjin Kang ◽  
Jong-in Choi

On the game screen, the UI interface provides key information for gameplay. A vision deep learning network exploits pure pixel information from the screen; if, in addition, we separately extract the information provided by the UI interface and use it as an extra input, we can enhance the learning efficiency of deep learning networks. To this end, UI components such as buttons, image icons, and gauge bars must be segmented effectively so that only the relevant images are analyzed separately. In this paper, we propose a methodology that segments the UI components in a game using synthetic game images created in a game engine. We developed a tool that approximately detects the UI areas of a game screen image and used it to generate a large amount of synthetic labeling data. By training a Pix2Pix network on these data, we applied UI segmentation. The trained network can segment the UI areas of the target game regardless of the position of the corresponding UI components. Our methodology makes it possible to analyze the game screen without applying data augmentation, and it can also help vision researchers who need to extract semantic information from game image data.
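Generating paired (screen, mask) training data, as the proposed tool does, can be sketched synthetically: random rectangles stand in for buttons and gauge bars, and the mask labels their pixels. The sizes and counts below are arbitrary assumptions, not details of the paper's tool.

```python
import numpy as np

def synth_screen(rng, h=64, w=96):
    """Generate one synthetic (screen, mask) training pair: a random
    background plus rectangular 'UI components', with a per-pixel label
    mask marking where they are (1 = UI, 0 = background)."""
    screen = rng.random((h, w))
    mask = np.zeros((h, w), dtype=np.uint8)
    for _ in range(3):                       # three hypothetical UI widgets
        y = rng.integers(0, h - 10)
        x = rng.integers(0, w - 20)
        screen[y:y + 10, x:x + 20] = 0.9     # flat, button-like patch
        mask[y:y + 10, x:x + 20] = 1
    return screen, mask

rng = np.random.default_rng(42)
pairs = [synth_screen(rng) for _ in range(4)]
```

An image-to-image network such as Pix2Pix would be trained to map each `screen` to its `mask`, after which it can segment UI areas in real screenshots.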


Author(s):  
Ashwan A. Abdulmunem ◽  
Zinah Abdulridha Abutiheen ◽  
Hiba J. Aleqabie

Coronavirus disease (COVID-19) has had an enormous impact in the last few months, causing thousands of deaths around the world. This has prompted a rapid research effort to deal with the new virus. In computer science, many technical studies have tackled it using image-processing algorithms. In this work, we introduce a method based on deep learning networks to classify COVID-19 from X-ray images. Our results are encouraging enough to rely on for distinguishing infected people from healthy ones. We conduct our experiments on a recent Kaggle dataset of COVID-19 X-ray images, using a ResNet50 deep learning network with 5- and 10-fold cross-validation. The experimental results show that 5 folds give better results than 10 folds, with an accuracy of 97.28%.
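The k-fold cross-validation used for evaluation can be sketched as an index-splitting routine; fold counts of 5 and 10 correspond to the two settings compared in the paper, while the sample count below is arbitrary.

```python
import numpy as np

def kfold_indices(n_samples, k, seed=0):
    """Shuffle indices and split them into k roughly equal folds,
    yielding (train_idx, test_idx) pairs; each sample lands in the
    test set of exactly one fold."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(n_samples)
    folds = np.array_split(idx, k)
    for i in range(k):
        test = folds[i]
        train = np.concatenate([folds[j] for j in range(k) if j != i])
        yield train, test

# With, e.g., 100 labelled X-ray images and 5 folds, the classifier is
# trained 5 times on 80 images and scored on the held-out 20; the
# reported accuracy is the average over folds.
splits = list(kfold_indices(100, 5))
```

Fewer folds mean larger, more stable test sets per fold, one plausible reason the 5-fold setting scored higher here than the 10-fold one.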


2018 ◽  
Author(s):  
Zhi Zhou ◽  
Hsien-Chi Kuo ◽  
Hanchuan Peng ◽  
Fuhui Long

Abstract: Reconstructing the three-dimensional (3D) morphology of neurons is essential to understanding brain structure and function. Over the past decades, a number of neuron tracing tools, including manual, semi-automatic, and fully automatic approaches, have been developed to extract and analyze 3D neuronal structures. Nevertheless, most of them encode fixed rules to extract and connect the structural components of a neuron and therefore show limited performance on complicated neuron morphologies. Recently, deep learning has outperformed many other machine learning methods in a wide range of image analysis and computer vision tasks. Here we developed a new open-source toolbox, DeepNeuron, which uses deep learning networks to learn features and rules from data and trace neuron morphology in light microscopy images. DeepNeuron provides a family of modules that solve basic yet challenging problems in neuron tracing. These problems include, but are not limited to: (1) detecting neuron signal under different image conditions, (2) connecting neuronal signals into tree(s), (3) pruning and refining tree morphology, (4) quantifying the quality of morphology, and (5) classifying dendrites and axons in real time. We have tested DeepNeuron on light microscopy images, including bright-field and confocal images of human and mouse brain, on which it demonstrates robustness and accuracy in neuron tracing.

