Convolutional neural networks for improving image quality with noisy PET data

2020 ◽  
Vol 10 (1) ◽  
Author(s):  
Josh Schaefferkoetter ◽  
Jianhua Yan ◽  
Claudia Ortega ◽  
Andrew Sertic ◽  
Eli Lechtman ◽  
...  

Abstract Goal PET is a relatively noisy process compared to other imaging modalities, and sparsity of acquisition data leads to noise in the images. Recent work has focused on machine learning techniques to improve PET images, and this study investigates a deep learning approach to improve the quality of reconstructed image volumes through denoising by a 3D convolutional neural network. Potential improvements were evaluated within a clinical context by physician performance in a reading task. Methods A wide range of controlled noise levels was emulated from a set of chest PET data in patients with lung cancer, and a convolutional neural network was trained to denoise the reconstructed images using the full-count reconstructions as the ground truth. The benefits over conventional Gaussian smoothing were quantified across all noise levels by observer performance in an image ranking and lesion detection task. Results The CNN-denoised images were generally ranked by the physicians equal to or better than the Gaussian-smoothed images at all count levels, with the largest effects observed in the lowest-count image sets. For the CNN-denoised images, overall lesion contrast recovery was 60% and 90% at the 1 and 20 million count levels, respectively. Notwithstanding the reduced lesion contrast recovery in noisy data, the CNN-denoised images also yielded better lesion detectability at low count levels. For example, at 1 million true counts, the average true positive detection rate was around 40% for the CNN-denoised images and 30% for the smoothed images. Conclusion Significant improvements were found for CNN denoising for very noisy images, and to some degree at all noise levels. However, the technique presented here offered limited benefit to detection performance at the count levels routinely encountered in the clinic.
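As a concrete illustration of the contrast-recovery figures quoted above, the metric can be understood as the ratio of lesion-to-background contrast in the denoised image to that in the full-count reference. The NumPy sketch below uses a hypothetical 1D "image" and hand-made masks; it is not the authors' exact definition, only the general idea:

```python
import numpy as np

def contrast_recovery(denoised, reference, lesion_mask, background_mask):
    """Ratio of lesion-to-background contrast in the denoised image
    to that in the full-count reference (1.0 = perfect recovery)."""
    c_den = denoised[lesion_mask].mean() - denoised[background_mask].mean()
    c_ref = reference[lesion_mask].mean() - reference[background_mask].mean()
    return c_den / c_ref

# Toy 1D "image": lesion voxels at indices 4-5, background elsewhere.
reference = np.array([1.0, 1.0, 1.0, 1.0, 3.0, 3.0, 1.0, 1.0])
denoised  = np.array([1.0, 1.0, 1.0, 1.0, 2.2, 2.2, 1.0, 1.0])
lesion = np.zeros(8, dtype=bool); lesion[4:6] = True
background = ~lesion

# ≈ 0.6 here, mirroring the 60% recovery reported at the 1M count level
print(contrast_recovery(denoised, reference, lesion, background))
```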

2021 ◽  
Vol 9 (2) ◽  
pp. 211
Author(s):  
Faisal Dharma Adhinata ◽  
Gita Fadila Fitriana ◽  
Aditya Wijayanto ◽  
Muhammad Pajar Kharisma Putra

Indonesia is an agricultural country with abundant agricultural products. One of the crops used as a staple food for Indonesians is corn. Corn plants must be protected from diseases so that the quality of the corn harvest can be optimal. Early detection of disease in corn plants is needed so that farmers can provide treatment quickly and precisely. Previous research used machine learning techniques to solve this problem, but the results were not optimal because the amount of data used was small and insufficiently varied. We therefore propose a technique that can process large and varied data, in the hope that the resulting system is more accurate than the previous research. This research uses transfer learning for feature extraction combined with a Convolutional Neural Network as the classifier. We analysed the combination of DenseNet201 with either a Flatten or a Global Average Pooling layer. The experimental results show that the accuracy produced by the combination of DenseNet201 with the Global Average Pooling layer is better than that of DenseNet201 with the Flatten layer. The accuracy obtained is 93%, which shows the proposed system is more accurate than previous studies.
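The Flatten-versus-Global-Average-Pooling comparison in this study comes down to how the backbone's final feature map is collapsed before classification. A minimal NumPy sketch, assuming the standard 7×7×1920 output of DenseNet201 for 224×224 inputs, shows the difference in the size of the vector fed to the dense head:

```python
import numpy as np

# DenseNet201 ends in a 7x7 feature map with 1920 channels (224x224 input).
# The classification head can either flatten it or average each channel
# map to a single value (Global Average Pooling, GAP).
features = np.random.rand(7, 7, 1920)   # (H, W, C) feature map

flattened = features.reshape(-1)        # 7*7*1920 = 94080 inputs to the head
gap       = features.mean(axis=(0, 1))  # 1920 inputs to the head

print(flattened.shape)  # (94080,)
print(gap.shape)        # (1920,)
```

GAP shrinks the dense head by a factor of 49 here, which reduces overfitting risk and is one plausible reason the GAP variant generalized better in the reported experiments.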


A vast number of image processing and neural network approaches are currently being utilized in the analysis of various medical conditions. Malaria is a disease which can be diagnosed by examining blood smears, but when smears are examined manually, the diagnosis can be error-prone because it depends upon the quality of the smear and the expertise of the microscopist. Among the various machine learning techniques, convolutional neural networks (CNNs) promise relatively higher accuracy. We propose an Optimized Step-Increase CNN (OSICNN) model to classify red blood cell images taken from thin blood smear samples as infected or non-infected with the malaria parasite. The proposed OSICNN model consists of four convolutional layers and shows results comparable to other state-of-the-art models. The accuracy of identifying the parasite in RBCs is 98.3% with the proposed model.
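The building block of the four convolutional layers in such a model is the 2D convolution (implemented as cross-correlation in most deep learning frameworks). A minimal NumPy sketch, using a toy "cell" patch and an edge kernel rather than real smear data, illustrates the operation:

```python
import numpy as np

def conv2d(image, kernel):
    """Valid cross-correlation, the core operation of a CNN layer."""
    kh, kw = kernel.shape
    oh = image.shape[0] - kh + 1
    ow = image.shape[1] - kw + 1
    out = np.empty((oh, ow))
    for i in range(oh):
        for j in range(ow):
            out[i, j] = np.sum(image[i:i+kh, j:j+kw] * kernel)
    return out

# A vertical-edge kernel responds where dark (stained) pixels border
# the lighter cell interior -- the kind of low-level feature early
# convolutional layers learn.
cell = np.array([[0., 0., 1., 1.],
                 [0., 0., 1., 1.],
                 [0., 0., 1., 1.]])
edge_kernel = np.array([[-1., 1.],
                        [-1., 1.]])
print(conv2d(cell, edge_kernel))  # responds only at the edge column
```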


2019 ◽  
Vol 2019 (3) ◽  
pp. 191-209 ◽  
Author(s):  
Se Eun Oh ◽  
Saikrishna Sunkam ◽  
Nicholas Hopper

Abstract Recent advances in Deep Neural Network (DNN) architectures have received a great deal of attention due to their ability to outperform state-of-the-art machine learning techniques across a wide range of applications, as well as to automate the feature engineering process. In this paper, we broadly study the applicability of deep learning to website fingerprinting. First, we show that unsupervised DNNs can generate low-dimensional informative features that improve the performance of state-of-the-art website fingerprinting attacks. Second, when used as classifiers, we show that they can exceed the performance of existing attacks across a range of application scenarios, including fingerprinting Tor website traces, fingerprinting search engine queries over Tor, defeating fingerprinting defenses, and fingerprinting TLS-encrypted websites. Finally, we investigate which site-level features of a website influence its fingerprintability by DNNs.


Complexity ◽  
2020 ◽  
Vol 2020 ◽  
pp. 1-12 ◽  
Author(s):  
Syed Atif Ali Shah ◽  
Irfan Uddin ◽  
Furqan Aziz ◽  
Shafiq Ahmad ◽  
Mahmoud Ahmad Al-Khasawneh ◽  
...  

Organizations can grow, succeed, and sustain themselves if their employees are committed. The main assets of an organization are those employees who give it the required number of hours per month, in other words, those employees who are punctual in their attendance. Absenteeism from work is a multibillion-dollar problem: it costs money and decreases revenue. At the time of hiring an employee, organizations have no objective mechanism to predict whether that employee will be punctual or habitually absent. For some organizations, it can be very difficult to deal with employees who are not punctual, as firing may be impossible or may carry a huge cost. In this paper, we propose Neural Network and Deep Learning algorithms that can predict the punctuality behavior of employees at the workplace. The efficacy of the proposed method is compared against traditional machine learning techniques, and the results indicate 90.6% performance for the Deep Neural Network, versus 73.3% for a single-layer Neural Network and 82% for Decision Tree, SVM, and Random Forest. The proposed model provides a useful mechanism for organizations interested in knowing the likely behavior of employees at hiring time, and it can reduce the cost of paying inefficient or habitually absent employees. This is the first study of its kind to analyze patterns of absenteeism in employees using deep learning algorithms; it helps organizations further improve employees' quality of life and hence reduce absenteeism.
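As a toy illustration of the single-layer Neural Network baseline mentioned above, the sketch below trains a one-neuron (logistic) model on hypothetical punctuality features. The features, labels, and hyperparameters are invented for illustration and are not taken from the paper:

```python
import numpy as np

# Hypothetical features per employee: [avg lateness, prior absence rate],
# label 1 = habitually absent. Real features would come from HR records.
X = np.array([[0.0, 0.1], [0.1, 0.2], [0.2, 0.1],
              [0.8, 0.9], [0.9, 1.0], [1.0, 0.8]])
y = np.array([0, 0, 0, 1, 1, 1])

w, b = np.zeros(2), 0.0
for _ in range(2000):                        # batch gradient descent
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))   # sigmoid activation
    grad = p - y                             # dLoss/dlogit for log-loss
    w -= 0.5 * (X.T @ grad) / len(y)
    b -= 0.5 * grad.mean()

pred = (X @ w + b > 0).astype(int)
print((pred == y).mean())  # perfect on this tiny separable toy set
```

A deep network replaces the single weighted sum with several stacked nonlinear layers, which is what lets it model the interactions the paper credits for the jump from 73.3% to 90.6%.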


Author(s):  
Feidu Akmel ◽  
Ermiyas Birihanu ◽  
Bahir Siraj

Software systems are software products or applications that support business domains such as manufacturing, aviation, health care, insurance, and so on. Software quality is a means of measuring how software is designed and how well it conforms to that design. Some of the attributes we look for in software quality are correctness, product quality, scalability, completeness, and absence of bugs. However, quality standards differ from one organization to another, so it is better to apply software metrics to measure the quality of software. Attributes gathered from source code through software metrics can serve as input to a software defect predictor. Software defects are errors introduced by software developers and stakeholders. Finally, in this study we review the application of machine learning to software defect data gathered from previous research works.


Mathematics ◽  
2020 ◽  
Vol 8 (12) ◽  
pp. 2258
Author(s):  
Madhab Raj Joshi ◽  
Lewis Nkenyereye ◽  
Gyanendra Prasad Joshi ◽  
S. M. Riazul Islam ◽  
Mohammad Abdullah-Al-Wadud ◽  
...  

Enhancement of cultural heritage such as historical images is crucial to safeguarding the diversity of cultures. Automated colorization of black-and-white images has been the subject of extensive research through computer vision and machine learning techniques. Our research addresses the problem of generating plausible colored photographs from ancient, historical black-and-white images of Nepal using deep learning techniques without direct human intervention. Motivated by the recent success of deep learning techniques in image processing, a feed-forward, deep Convolutional Neural Network (CNN) in combination with Inception-ResNetV2 is trained on sets of sample images using back-propagation to recognize the pattern in RGB and grayscale values. The trained neural network is then used to predict the two chroma channels, a* and b*, given the grayscale L channel of test images. The CNN vividly colorizes images with the help of a fusion layer that accounts for local as well as global features. Two objective measures, Mean Squared Error (MSE) and Peak Signal-to-Noise Ratio (PSNR), are employed for objective quality assessment between the estimated color image and its ground truth. The model is trained on a dataset we created of 1.2K historical images, comprising old and ancient photographs of Nepal, each at 256 × 256 resolution. The loss (MSE), PSNR, and accuracy of the model are found to be 6.08%, 34.65 dB, and 75.23%, respectively. In addition to the training results, public acceptance or subjective validation of the generated images is assessed by means of a user study, in which the model shows 41.71% naturalness in the evaluated colorization results.
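The two quality measures used in this study are directly related: PSNR = 10·log10(peak² / MSE). A small NumPy sketch with toy images (not the authors' data) shows the computation:

```python
import numpy as np

def psnr(estimate, truth, peak=255.0):
    """Peak Signal-to-Noise Ratio in dB between a colorized estimate
    and its ground-truth image; higher is better."""
    mse = np.mean((estimate.astype(float) - truth.astype(float)) ** 2)
    return 10.0 * np.log10(peak ** 2 / mse)

truth = np.full((4, 4), 100.0)
estimate = truth + 4.0          # uniform error of 4 intensity levels
print(psnr(estimate, truth))    # ≈ 36.1 dB
```

Values in the mid-30s dB, like the 34.65 dB reported above, correspond to small average per-pixel errors relative to the 0-255 intensity range.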


Vibration ◽  
2021 ◽  
Vol 4 (2) ◽  
pp. 341-356
Author(s):  
Jessada Sresakoolchai ◽  
Sakdirat Kaewunruen

Various techniques have been developed to detect railway defects. One of the popular techniques is machine learning. This unprecedented study applies deep learning, which is a branch of machine learning techniques, to detect and evaluate the severity of combined rail defects. The combined defects in the study are settlement and dipped joint. Features used to detect and evaluate the severity of combined defects are axle box accelerations simulated using a verified rolling stock dynamic behavior simulation called D-Track. A total of 1650 simulations are run to generate numerical data. Deep learning techniques used in the study are deep neural network (DNN), convolutional neural network (CNN), and recurrent neural network (RNN). Simulated data are used in two ways: simplified data and raw data. Simplified data are used to develop the DNN model, while raw data are used to develop the CNN and RNN models. For simplified data, features are extracted from raw data: the weight of rolling stock, the speed of rolling stock, and three peak and three bottom accelerations from each of the two wheels of rolling stock. In total, there are 14 features used as simplified data for developing the DNN model. For raw data, time-domain accelerations are used directly to develop the CNN and RNN models without processing or data extraction. Hyperparameter tuning is performed using grid search to ensure that the performance of each model is optimized. To detect the combined defects, the study proposes two approaches: the first uses one model to detect settlement and dipped joint together, and the second uses two models to detect settlement and dipped joint separately. The results show that the CNN models of both approaches provide the same accuracy of 99%, so one model is sufficient to detect settlement and dipped joint. To evaluate the severity of the combined defects, the study applies classification and regression concepts. Classification is used to evaluate the severity by categorizing defects into light, medium, and severe classes, and regression is used to estimate the size of defects. From the study, the CNN model is suitable for evaluating dipped joint severity with an accuracy of 84% and a mean absolute error (MAE) of 1.25 mm, and the RNN model is suitable for evaluating settlement severity with an accuracy of 99% and an MAE of 1.58 mm.
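The 14-feature simplified representation described above (weight, speed, plus three peak and three bottom accelerations per wheel) can be sketched as follows. The acceleration traces and the weight/speed values here are synthetic placeholders, not D-Track output:

```python
import numpy as np

def top_k_peaks_and_bottoms(signal, k=3):
    """Return the k largest and k smallest accelerations in a signal."""
    s = np.sort(signal)
    return s[-k:][::-1], s[:k]

# Hypothetical axle-box acceleration traces for the two wheels.
rng = np.random.default_rng(0)
wheel_a = rng.normal(0.0, 1.0, 500)
wheel_b = rng.normal(0.0, 1.0, 500)

weight, speed = 20.0, 80.0  # tonnes, km/h (illustrative values)
features = [weight, speed]
for wheel in (wheel_a, wheel_b):
    peaks, bottoms = top_k_peaks_and_bottoms(wheel)
    features.extend(peaks)    # 3 peak accelerations per wheel
    features.extend(bottoms)  # 3 bottom accelerations per wheel

print(len(features))  # 14, matching the simplified feature set
```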


Work ◽  
2021 ◽  
pp. 1-12
Author(s):  
Zhang Mengqi ◽  
Wang Xi ◽  
V.E. Sathishkumar ◽  
V. Sivakumar

BACKGROUND: Nowadays, smart cities are growing steadily, drawing on many information and communication technologies to maximize the quality of their services. Even though the smart-city concept provides many valuable services, security management is still one of the major issues due to shared threats and activities. To overcome these problems, the security factors of smart cities should be analyzed continuously so that unwanted activities can be eliminated and the quality of services enhanced. OBJECTIVES: To address this problem, active machine learning techniques are used to predict the quality of services in the smart city and to manage security-related issues. In this work, a deep reinforcement learning (DRL) concept is used to learn the features of the smart city; the learning process captures the entire activities of the smart city. Within the smart city, information is gathered with the help of security robots called Cobalt robots. Newly incoming features of the smart city are examined through a modular deep neural network (MDNN). RESULTS: The system successfully predicts unwanted activity in the smart city by dividing the collected data into smaller subsets, which reduces complexity and improves the overall security-management process. The efficiency of the system is evaluated using experimental analysis. CONCLUSION: This exploratory study is conducted with 200 obstacles placed in the smart city, and the introduced DRL-with-MDNN approach attains the best results for security maintenance.


2021 ◽  
Vol 11 (7) ◽  
pp. 317
Author(s):  
Ismael Cabero ◽  
Irene Epifanio

This paper presents a snapshot of the distribution of time that Spanish academic staff spend on different tasks. We carry out a statistical exploratory study by analyzing the responses provided in a survey of 703 Spanish academic staff in order to draw a clear picture of the current situation. This analysis considers many factors, including primarily gender, academic ranks, age, and academic disciplines. The tasks considered are divided into smaller activities, which allows us to discover hidden patterns. Tasks are not only restricted to the academic world, but also relate to domestic chores. We address this problem from a totally new perspective by using machine learning techniques, such as cluster analysis. In order to make important decisions, policymakers must know how academic staff spend their time, especially now that legal modifications are planned for the Spanish university environment. In terms of the time spent on quality of teaching and caring tasks, we expose huge gender gaps. Non-recognized overtime is very frequent.
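Cluster analysis of time-use responses, as used in this study, can be sketched with a plain k-means implementation. The three-column toy data below (hypothetical weekly hours, not the actual survey responses) stands in for the real answers:

```python
import numpy as np

def kmeans(X, k, iters=50, seed=0):
    """Plain k-means: alternate nearest-centroid assignment and
    centroid update until the partition stabilizes."""
    rng = np.random.default_rng(seed)
    centroids = X[rng.choice(len(X), k, replace=False)]
    for _ in range(iters):
        d = np.linalg.norm(X[:, None] - centroids[None, :], axis=2)
        labels = d.argmin(axis=1)
        centroids = np.array([X[labels == j].mean(axis=0) for j in range(k)])
    return labels

# Hypothetical weekly hours per respondent: [teaching, research, chores].
X = np.array([[12., 20., 10.], [13., 18., 12.], [11., 21., 11.],
              [20.,  8., 30.], [22.,  6., 28.], [21.,  7., 32.]])
labels = kmeans(X, k=2)
print(labels)  # the two time-use profiles fall into separate clusters
```

In the actual analysis, clusters like these could then be cross-tabulated with gender, rank, age, and discipline to expose the gaps the paper reports.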


2021 ◽  
Vol 19 (2) ◽  
pp. 19-30
Author(s):  
G. Nagarajan ◽  
Dr.A. Mahabub Basha ◽  
R. Poornima

One major psychiatric disorder found in humans is ASD (Autistic Spectrum Disorder). The disease manifests as a mental disorder that restricts communication, language, and speech according to each individual's abilities. Even though a cure is complex and practically impossible, early detection is required to mitigate its intensity. ASD has no pre-defined age at which it affects humans. A system for effectively predicting ASD based on MLTs (Machine Learning Techniques) is proposed in this work. Hybrid APMs (Autism Prediction Models) combining multiple techniques, such as RF (Random Forest), CART (Classification and Regression Trees), and RF-ID3 (RF-Iterative Dichotomiser 3), perform well but face issues with memory usage, execution time, and inadequate feature selection. Taking these issues into account, this work overcomes these hurdles with a hybrid technique that combines the MCSO (Modified Chicken Swarm Optimization) and PDCNN (Polynomial Distribution based Convolution Neural Network) algorithms. The proposed scheme's experimental results show higher accuracy, precision, sensitivity, and specificity, along with lower FPRs (False Positive Rates) and time complexity, when compared to other methods.
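The evaluation metrics listed above are all derived from a binary confusion matrix. A small sketch with illustrative counts (not results from the paper) shows how each is computed:

```python
def binary_metrics(tp, fp, tn, fn):
    """Standard screening metrics from confusion-matrix counts."""
    sensitivity = tp / (tp + fn)          # true positive rate (recall)
    specificity = tn / (tn + fp)          # true negative rate
    precision   = tp / (tp + fp)
    fpr         = fp / (fp + tn)          # false positive rate
    accuracy    = (tp + tn) / (tp + fp + tn + fn)
    return sensitivity, specificity, precision, fpr, accuracy

# Illustrative counts for an ASD screening classifier.
sens, spec, prec, fpr, acc = binary_metrics(tp=45, fp=5, tn=40, fn=10)
print(sens, spec, prec, fpr, acc)
# sensitivity ≈ 0.82, specificity ≈ 0.89, precision 0.9,
# FPR ≈ 0.11, accuracy 0.85
```

Note that specificity and FPR are complements (FPR = 1 − specificity), so a good classifier has high specificity and correspondingly low FPR.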

