A Plastic Contamination Image Dataset for Deep Learning Model Development and Training

2020 ◽  
Vol 2 (2) ◽  
pp. 317-321
Author(s):  
Mathew G. Pelletier ◽  
Greg A. Holt ◽  
John D. Wanjura

The removal of plastic contamination in cotton lint is a top priority for the U.S. cotton industry. One of the main sources of plastic contamination in marketable cotton bales is the plastic used to wrap cotton modules on cotton harvesters. To help mitigate plastic contamination at the gin, automatic inspection systems are needed to detect contamination and control removal systems. Because of significant cost constraints in the U.S. cotton ginning industry, low-cost color cameras have been successfully adopted for detecting plastic contamination. However, plastics of similar color to the background are difficult to detect with traditional machine learning algorithms, so current detection/removal system designs cannot remove all plastics, and better detection methods are still needed. Recent advances in deep learning convolutional neural networks (CNNs) show promise for using low-cost color cameras to detect objects of interest placed against a background of similar color. CNNs do this by mimicking the human visual detection system, focusing on differences in texture rather than color as the primary detection paradigm. The key to leveraging CNNs is the development of the extensive image datasets required for training. One impediment to this methodology is the need for large image datasets in which each image is annotated with bounding boxes surrounding every object of interest. Because this requirement is labor-intensive, such image datasets carry significant value. This report details the included image dataset as well as the system design used to collect the images. For acquisition of the image dataset, a prototype detection system was developed and deployed in a commercial cotton gin, where images were collected for the duration of the 2018–2019 ginning season.
A discussion of the observational impact that the system had on reduction of plastic contamination at the commercial gin, utilizing traditional color-based machine learning algorithms, is also included.
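The bounding-box annotation requirement described in the abstract can be sketched as a small record builder; the field names and file name below are illustrative, not the dataset's actual schema.

```python
# Minimal sketch of the kind of per-image bounding-box annotation a
# detection dataset needs; field names here are illustrative assumptions.
import json

def make_annotation(image_file, width, height, boxes):
    """Build one image record; `boxes` holds (x_min, y_min, x_max, y_max, label)."""
    return {
        "image": image_file,
        "width": width,
        "height": height,
        "objects": [
            {"label": label, "bbox": [x0, y0, x1, y1]}
            for (x0, y0, x1, y1, label) in boxes
        ],
    }

# One hypothetical image with two pieces of plastic marked for training.
record = make_annotation(
    "gin_feed_0001.jpg", 1920, 1080,
    [(412, 220, 530, 310, "plastic"), (900, 640, 1010, 700, "plastic")],
)
print(json.dumps(record, indent=2))
```

Annotating every object in every frame this way is exactly the labor-intensive step that makes such datasets valuable.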

Author(s):  
Abdirahman Osman Hashi ◽  
Abdullahi Ahmed Abdirahman ◽  
Mohamed Abdirahman Elmi ◽  
Siti Zaiton Mohd Hashi ◽  
Octavio Ernesto Romo Rodriguez

2021 ◽  
Vol 11 (1) ◽  
Author(s):  
Rajat Garg ◽  
Anil Kumar ◽  
Nikunj Bansal ◽  
Manish Prateek ◽  
Shashi Kumar

Urban area mapping is an important application of remote sensing that aims at both estimating land cover and detecting land-cover change under urban areas. A major challenge in analyzing Synthetic Aperture Radar (SAR) remote sensing data is the strong similarity of highly vegetated urban areas and oriented urban targets to actual vegetation. This similarity leads to misclassification of urban areas as forest cover. The present work is a precursor study for the dual-frequency L- and S-band NASA-ISRO Synthetic Aperture Radar (NISAR) mission and aims at minimizing the misclassification of such highly vegetated and oriented urban targets into the vegetation class with the help of deep learning. In this study, three machine learning algorithms, Random Forest (RF), K-Nearest Neighbour (KNN), and Support Vector Machine (SVM), were implemented along with a deep learning model, DeepLabv3+, for semantic segmentation of Polarimetric SAR (PolSAR) data. It is a general perception that a large dataset is required for the successful implementation of any deep learning model, but in SAR-based remote sensing a major issue is the unavailability of a large benchmark labeled dataset for implementing deep learning algorithms from scratch. The current work shows that a pre-trained deep learning model, DeepLabv3+, outperforms the machine learning algorithms on the land use and land cover (LULC) classification task even with a small dataset, using transfer learning. The highest pixel accuracy of 87.78% and overall pixel accuracy of 85.65% were achieved with DeepLabv3+; Random Forest performed best among the machine learning algorithms with an overall pixel accuracy of 77.91%, while SVM and KNN trailed with overall accuracies of 77.01% and 76.47%, respectively.
The highest precision of 0.9228 was recorded for the urban class in the semantic segmentation task with DeepLabv3+, while the machine learning algorithms SVM and RF gave comparable results with precisions of 0.8977 and 0.8958, respectively.
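The reported metrics, overall pixel accuracy and per-class precision, both fall out of a confusion matrix over pixels; a minimal sketch, with a toy matrix that is illustrative rather than the study's data:

```python
# Sketch of the reported segmentation metrics computed from a pixel-level
# confusion matrix. The class names and counts below are made up.
def overall_accuracy(cm):
    """Fraction of all pixels on the diagonal (correctly classified)."""
    correct = sum(cm[i][i] for i in range(len(cm)))
    total = sum(sum(row) for row in cm)
    return correct / total

def precision(cm, k):
    """Of all pixels predicted as class k, the fraction that are truly class k."""
    predicted_k = sum(cm[i][k] for i in range(len(cm)))
    return cm[k][k] / predicted_k if predicted_k else 0.0

# rows = true class, cols = predicted class; order: urban, vegetation, water
cm = [[90, 8, 2],
      [10, 85, 5],
      [0, 3, 97]]
print(overall_accuracy(cm))  # (90+85+97)/300
print(precision(cm, 0))      # urban precision: 90/100
```

The urban-into-vegetation misclassification the study targets shows up as the `cm[0][1]` off-diagonal count.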


2021 ◽  
Vol 10 (2) ◽  
pp. 205846012199029
Author(s):  
Rani Ahmad

Background The scope and productivity of artificial intelligence applications in health science and medicine, particularly in medical imaging, are rapidly progressing, driven by relatively recent developments in big data and deep learning and by increasingly powerful computer algorithms. Accordingly, there are a number of opportunities and challenges for the radiological community. Purpose To review the challenges and barriers experienced in diagnostic radiology on the basis of the key clinical applications of machine learning techniques. Material and Methods Studies published in 2010–2019 that report on the efficacy of machine learning models were selected. A single contingency table was selected for each study to report the highest accuracy of radiology professionals and machine learning algorithms, and a meta-analysis of the studies was conducted based on these contingency tables. Results The specificity for all deep learning models ranged from 39% to 100%, whereas sensitivity ranged from 85% to 100%. The pooled sensitivity and specificity were 89% and 85%, respectively, for the deep learning algorithms detecting abnormalities, compared to 75% and 91% for radiology experts. The pooled specificity and sensitivity for the comparison between radiology professionals and deep learning algorithms were 91% and 81% for deep learning models and 85% and 73% for radiology professionals (p < 0.001), respectively. The pooled sensitivity of detection was 82% for health-care professionals and 83% for deep learning algorithms (p < 0.005). Conclusion Radiomic information extracted through machine learning programs from images may not be discernible through visual examination, and thus may improve the prognostic and diagnostic value of data sets.
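Pooling sensitivity and specificity from per-study 2x2 contingency tables can be sketched as below; summing cell counts across tables is an assumption for illustration, since the abstract does not specify the pooling model used.

```python
# Hedged sketch: pool sensitivity/specificity across studies by summing
# the cells of each study's 2x2 contingency table. The two example
# tables are hypothetical, not the meta-analysis's data.
def pooled_sens_spec(tables):
    """Each table is (TP, FP, FN, TN); returns (sensitivity, specificity)."""
    tp = sum(t[0] for t in tables)
    fp = sum(t[1] for t in tables)
    fn = sum(t[2] for t in tables)
    tn = sum(t[3] for t in tables)
    return tp / (tp + fn), tn / (tn + fp)

studies = [(45, 5, 5, 45), (80, 15, 10, 95)]
sens, spec = pooled_sens_spec(studies)
print(round(sens, 3), round(spec, 3))
```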


Sensors ◽  
2020 ◽  
Vol 20 (20) ◽  
pp. 5953 ◽  
Author(s):  
Parastoo Alinia ◽  
Ali Samadani ◽  
Mladen Milosevic ◽  
Hassan Ghasemzadeh ◽  
Saman Parvaneh

Automated lying-posture tracking is important in preventing bed-related disorders, such as pressure injuries, sleep apnea, and lower-back pain. Prior research studied in-bed lying posture tracking using sensors of different modalities (e.g., accelerometer and pressure sensors). However, there remain significant gaps in research regarding how to design efficient in-bed lying posture tracking systems. These gaps can be articulated through several research questions, as follows. First, can we design a single-sensor, pervasive, and inexpensive system that can accurately detect lying postures? Second, what computational models are most effective in the accurate detection of lying postures? Finally, what physical configuration of the sensor system is most effective for lying posture tracking? To answer these important research questions, in this article we propose a comprehensive approach for designing a sensor system that uses a single accelerometer along with machine learning algorithms for in-bed lying posture classification. We design two categories of machine learning algorithms based on deep learning and traditional classification with handcrafted features to detect lying postures. We also investigate what wearing sites are the most effective in the accurate detection of lying postures. We extensively evaluate the performance of the proposed algorithms on nine different body locations and four human lying postures using two datasets. Our results show that a system with a single accelerometer can be used with either deep learning or traditional classifiers to accurately detect lying postures. The best models in our approach achieve an F1 score that ranges from 95.2% to 97.8% with a coefficient of variation from 0.03 to 0.05. The results also identify the thighs and chest as the most salient body sites for lying posture tracking. 
Our findings in this article suggest that, because accelerometers are ubiquitous and inexpensive sensors, they can be a viable source of information for pervasive monitoring of in-bed postures.
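The "traditional classification with handcrafted features" branch described above can be sketched as simple per-axis statistics over a window of 3-axis accelerometer samples; the feature choice here is an illustrative assumption, not the paper's exact feature set.

```python
# Sketch of handcrafted features (per-axis mean and standard deviation)
# from a window of (ax, ay, az) accelerometer samples, the kind of input
# a traditional lying-posture classifier would consume.
import math

def window_features(samples):
    """samples: list of (ax, ay, az) tuples; returns per-axis mean and std."""
    feats = []
    for axis in range(3):
        vals = [s[axis] for s in samples]
        mean = sum(vals) / len(vals)
        var = sum((v - mean) ** 2 for v in vals) / len(vals)
        feats.extend([mean, math.sqrt(var)])
    return feats

# A still, supine-like window: gravity mostly along the z axis.
window = [(0.02, -0.01, 0.99), (0.01, 0.00, 1.01), (-0.01, 0.01, 1.00)]
print(window_features(window))
```

The gravity direction encoded in the mean features is what lets a single accelerometer on the chest or thigh separate supine, prone, and side-lying postures.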


Author(s):  
Pratyush Kaware

In this paper, a cost-effective sensor is implemented to read finger-bend signals: the sensor is attached to a finger so that the signals can be classified by the degree of bend as well as by the joint about which the finger is bent. Various machine learning algorithms were tested to find the most accurate and consistent classifier. Support Vector Machine proved to be the algorithm best suited to classifying our data, and using it we were able to predict the live state of a finger, i.e., the degree of bend and the joints involved. The live voltage values from the sensor were transmitted by a NodeMCU microcontroller, converted to digital form, and uploaded to a database for analysis.
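The paper's classifier is an SVM; as a hedged, dependency-free stand-in, the core idea of mapping a sensor voltage to a bend class can be sketched with a nearest-centroid rule. The voltage levels and class labels below are hypothetical.

```python
# Stand-in for the paper's SVM: a nearest-centroid classifier over
# flex-sensor voltages. Centroid values and labels are made up.
def nearest_centroid(voltage, centroids):
    """centroids: {label: mean voltage}; returns the closest label."""
    return min(centroids, key=lambda label: abs(centroids[label] - voltage))

# Hypothetical mean voltages learned per (degree-of-bend, joint) class.
centroids = {
    "straight": 1.2,
    "half_bend_pip": 2.1,
    "full_bend_mcp": 3.0,
}
print(nearest_centroid(2.0, centroids))  # closest to half_bend_pip
```

An SVM replaces the centroid distance with a learned margin-maximizing boundary, which handles overlapping voltage ranges better.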


2021 ◽  
Author(s):  
Lamya Alderywsh ◽  
Aseel Aldawood ◽  
Ashwag Alasmari ◽  
Farah Aldeijy ◽  
Ghadah Alqubisy ◽  
...  

BACKGROUND There is a serious threat from fake news spreading in technologically advanced societies, including those in the Arab world, via deceptive machine-generated text. In the last decade, Arabic fake news identification has gained increased attention, and numerous detection approaches have shown some ability to find fake news across various data sources. Nevertheless, many existing approaches overlook recent advancements in fake news detection, particularly the incorporation of machine learning systems. OBJECTIVE The Tebyan project aims to address the problem of fake news by developing a detection system that employs machine learning algorithms to determine whether news is fake or real in the context of the Arab world. METHODS The project went through numerous phases, using an iterative methodology to develop a system that detects misinformation and contextualizes fake news with respect to society's information. It involves implementing the machine learning system in Python and collecting genuine and fake news datasets. The study also assesses how information-exchange behaviors can minimize fake news and find the optimal source of authentication for emergent news through system testing approaches. RESULTS The main deliverable of this project is the Tebyan system, which allows users to check the credibility of news in Arabic newspapers. The SVM classifier, on average, exhibited the highest performance, achieving 90% on every performance measure across sources. The second-best algorithm is the linear SVC, which also reached 90% on performance measures with the typical type of fake information in these societies.
CONCLUSIONS The study concludes that building a system with machine learning algorithms in the Python programming language allows rapid measurement of users' perceptions, letting them comment on and rate the credibility result and subscribe to news email services.
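The abstract names the classifiers but not the text features; a common choice for news classification is TF-IDF, sketched below in plain Python. This is an assumption for illustration, not the Tebyan system's documented pipeline.

```python
# Hedged sketch of TF-IDF weighting, a typical feature-extraction step
# before training an SVM/linear SVC on news text.
import math

def tf_idf(docs):
    """docs: list of token lists; returns one {term: weight} dict per doc."""
    n = len(docs)
    df = {}
    for doc in docs:
        for term in set(doc):
            df[term] = df.get(term, 0) + 1
    weighted = []
    for doc in docs:
        weights = {}
        for term in set(doc):
            tf = doc.count(term) / len(doc)
            idf = math.log(n / df[term])
            weights[term] = tf * idf
        weighted.append(weights)
    return weighted

docs = [["fake", "news", "alert"], ["real", "news", "report"]]
vectors = tf_idf(docs)
print(vectors[0]["fake"] > vectors[0]["news"])  # shared term is down-weighted
```

Terms that appear in every document get zero weight, so the classifier focuses on words that distinguish fake from genuine articles.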


2021 ◽  
Author(s):  
Celestine Udim Monday ◽  
Toyin Olabisi Odutola

Abstract Natural gas production and transportation are at risk of gas hydrate plugging, especially in offshore environments where temperature is low and pressure is high. These plugs can eventually block the pipeline, increase back pressure, stop production, and ultimately rupture gas pipelines. This study develops machine learning models to predict gas hydrate formation and pressure changes within a natural gas flow line treated with kinetic inhibitors. Green hydrate inhibitors A, B, and C were obtained as plant extracts and applied in low dosages (0.01 wt.% to 0.1 wt.%) on a 12-meter skid-mounted closed hydrate flow loop. From the data generated, the optimal dosages of inhibitors A, B, and C were observed to be 0.02 wt.%, 0.06 wt.%, and 0.1 wt.%, respectively. The data associated with these optimal dosages were fed to a set of supervised machine learning algorithms (extreme gradient boosting, gradient boosting regressor, and linear regressor) and a deep learning algorithm (artificial neural network). The outputs of the supervised learning algorithms and the deep learning algorithm were compared in terms of their accuracy in predicting hydrate formation and the pressure within the natural gas flow line. All models had accuracies greater than 90%. These results show that it is viable to apply machine learning algorithms to flow assurance problems, analyzing data and generating reports that can improve the accuracy and speed of the on-site decision-making process.
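The simplest model in the compared set, a linear regressor, can be sketched as one-feature ordinary least squares predicting flow-line pressure over time; the data points below are illustrative, not the flow loop's measurements.

```python
# Hedged sketch: ordinary least-squares fit of a line, the "linear
# regressor" baseline among the compared models. Data are made up.
def fit_line(xs, ys):
    """Return (slope, intercept) minimizing squared error."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
            sum((x - mx) ** 2 for x in xs)
    return slope, my - slope * mx

# Hypothetical pressure (psi) rising as hydrates begin to restrict flow.
hours = [0, 1, 2, 3, 4]
pressure = [100, 102, 104, 106, 108]
slope, intercept = fit_line(hours, pressure)
print(slope, intercept)  # 2.0 psi/hour from 100.0 psi
```

The gradient-boosting and neural-network models in the study replace this single straight line with nonlinear fits, which is why they can also capture the onset of hydrate formation.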

