Acoustic emission for in situ quality monitoring in additive manufacturing using spectral convolutional neural networks

2018, Vol. 21, pp. 598-604
Author(s): S.A. Shevchik, C. Kenel, C. Leinenbach, K. Wasmer
2019, Vol. 15 (9), pp. 5194-5203
Author(s): Sergey A. Shevchik, Giulio Masinelli, Christoph Kenel, Christian Leinenbach, Kilian Wasmer

Author(s): Y. A. Lumban-Gaol, K. A. Ohori, R. Y. Peters

Abstract. Satellite-Derived Bathymetry (SDB) has been used in many applications related to coastal management. SDB can efficiently fill gaps in data obtained from traditional echo-sounding measurements. However, it still requires a large amount of training data, which is not available in many areas. Furthermore, accuracy problems persist because a linear model cannot capture the non-linear relationship between reflectance and depth caused by bottom variations and noise. Convolutional Neural Networks (CNNs) offer the ability to capture both the connection between neighbouring pixels and this non-linear relationship. These characteristics make CNNs well suited to shallow-water depth extraction. We investigate the accuracy of different architectures using different window sizes and band combinations. We use Sentinel-2 Level 2A images to provide reflectance values, and Lidar and Multi Beam Echo Sounder (MBES) datasets as depth references to train and test the model. A set of Sentinel-2 and in situ depth subimage pairs is extracted to perform CNN training. The model is compared to the linear transform and applied to two other study areas. The resulting accuracy ranges from 1.3 m to 1.94 m, and the coefficient of determination reaches 0.94. The SDB model generated using a window size of 9×9 indicates compatibility with the reference depths, especially in areas deeper than 15 m. Adding both short-wave infrared bands to the four visible bands in training improves the overall accuracy of SDB. Applying the pre-trained model to other study areas provides similar results, depending on the water conditions.
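The linear transform the CNN is compared against can be illustrated with a minimal sketch. This is not the authors' code: it assumes the classical log-ratio form of linear SDB, in which depth is modelled as a linear function of the ratio of log-scaled blue and green reflectances, with the two coefficients fitted against reference depths (e.g. from Lidar or MBES).

```python
import numpy as np

def log_ratio_feature(blue, green, n=1000.0):
    """Ratio of log-scaled blue to green reflectance (log-ratio SDB feature).

    The scaling constant n is an illustrative assumption; it only keeps
    the log arguments comfortably above 1.
    """
    return np.log(n * blue) / np.log(n * green)

def fit_linear_sdb(blue, green, depth_ref):
    """Least-squares fit of depth = m1 * feature + m0 against reference depths."""
    x = log_ratio_feature(blue, green)
    A = np.column_stack([x, np.ones_like(x)])
    (m1, m0), *_ = np.linalg.lstsq(A, depth_ref, rcond=None)
    return m1, m0

def predict_depth(blue, green, m1, m0):
    """Apply the fitted linear model to new reflectance values."""
    return m1 * log_ratio_feature(blue, green) + m0
```

A CNN replaces the single per-pixel feature with a learned function of a whole reflectance window (e.g. 9×9 pixels across six bands), which is how it can absorb bottom-type variation and noise that this linear model cannot.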


2018, Vol. 5 (5), pp. 939-945
Author(s): Grace X. Gu, Chun-Teh Chen, Deon J. Richmond, Markus J. Buehler

A new approach to design hierarchical materials using convolutional neural networks is proposed and validated through additive manufacturing and testing.


Weed Science, 2018, Vol. 67 (2), pp. 239-245
Author(s): Shaun M. Sharpe, Arnold W. Schumann, Nathan S. Boyd

Abstract. Weed interference during crop establishment is a serious concern for Florida strawberry [Fragaria × ananassa (Weston) Duchesne ex Rozier (pro sp.) [chiloensis × virginiana]] producers. In situ remote detection for precision herbicide application reduces both the risk of crop injury and herbicide inputs. Carolina geranium (Geranium carolinianum L.) is a widespread broadleaf weed within Florida strawberry production with sensitivity to clopyralid, the only available POST broadleaf herbicide. Geranium carolinianum leaf structure is distinct from that of the strawberry plant, which makes it an ideal candidate for pattern recognition in digital images via convolutional neural networks (CNNs). The study objective was to assess the precision of three CNNs in detecting G. carolinianum. Images of G. carolinianum growing in competition with strawberry were gathered at four sites in Hillsborough County, FL. Three CNNs were compared: object detection–based DetectNet and image classification–based VGGNet and GoogLeNet. Two DetectNet networks were trained to detect either leaves or canopies of G. carolinianum. Image classification using GoogLeNet and VGGNet was largely unsuccessful during validation with whole images (F-score < 0.02). CNN training using cropped images increased G. carolinianum detection during validation for VGGNet (F-score = 0.77) and GoogLeNet (F-score = 0.62). The G. carolinianum leaf–trained DetectNet achieved the highest F-score (0.94) for plant detection during validation. Leaf-based detection led to more consistent detection of G. carolinianum within the strawberry canopy and reduced the recall-related errors encountered in canopy-based training. The smaller target of leaf-based DetectNet did increase false positives, but such errors can be overcome with additional training images for network desensitization. DetectNet was the most viable CNN tested for image-based remote sensing of G. carolinianum in competition with strawberry.
Future research will identify the optimal approach for in situ detection and integrate the detection technology with a precision sprayer.
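The F-scores that rank these detectors combine precision and recall in a single number. A minimal sketch (not the study's code; the counts below are hypothetical) shows the computation from true-positive, false-positive, and false-negative counts on validation images:

```python
def f_score(tp: int, fp: int, fn: int) -> float:
    """F-score: harmonic mean of precision and recall from detection counts."""
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    if precision + recall == 0.0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

# Hypothetical counts: a detector with 94% precision and 94% recall
# lands at an F-score near the paper's best result of 0.94.
print(round(f_score(tp=94, fp=6, fn=6), 2))  # 0.94
```

Because the F-score penalizes whichever of precision or recall is lower, the leaf-trained DetectNet's extra false positives cost it less than the recall-related misses seen in canopy-based training.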


Author(s):  
Glen Williams ◽  
Nicholas A. Meisel ◽  
Timothy W. Simpson ◽  
Christopher McComb

Abstract. The widespread growth of additive manufacturing, a field with a complex informatic "digital thread", has helped fuel the creation of design repositories, where multiple users can upload, distribute, and download candidate designs for a variety of situations. Additionally, advancements in additive manufacturing process development, design frameworks, and simulation are expanding what is possible to fabricate with AM, further growing the richness of such repositories. Machine learning offers new opportunities to combine the rich geometric data in these design repositories with associated process and performance data to train predictive models capable of automatically assessing build metrics related to AM part manufacturability. Although the design repositories that can be used to train these machine learning constructs are expanding, our understanding of what makes a particular design repository useful as a machine learning training dataset is minimal. In this study, we use a metamodel to predict the extent to which individual design repositories can train accurate convolutional neural networks. To facilitate the creation and refinement of this metamodel, we constructed a large artificial design repository and subsequently split it into sub-repositories. We then analyzed metadata on the size, complexity, and diversity of the sub-repositories for use as independent variables predicting the accuracy of, and the computational effort required for, training convolutional neural networks. The networks each predict one of three additive manufacturing build metrics: (1) part mass, (2) support material mass, and (3) build time. Our results suggest that metamodels predicting the convolutional neural network coefficient of determination, as opposed to computational effort, were most accurate.
Moreover, the size of a design repository, the average complexity of its constituent designs, and the average and spread of design spatial diversity were the best predictors of convolutional neural network accuracy.
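The metamodel idea can be sketched in a few lines. This is an illustrative assumption, not the authors' implementation: each row describes one sub-repository by the metadata features the abstract names as predictive (size, mean design complexity, and the mean and spread of spatial diversity), and a linear metamodel is fit to predict the coefficient of determination (R²) a CNN trained on that sub-repository would reach.

```python
import numpy as np

def fit_metamodel(features, cnn_r2):
    """Least-squares linear metamodel: r2 ~ features @ w + intercept."""
    A = np.column_stack([features, np.ones(len(features))])
    coeffs, *_ = np.linalg.lstsq(A, cnn_r2, rcond=None)
    return coeffs  # one weight per feature, intercept last

def predict_r2(features, coeffs):
    """Predict CNN accuracy (R^2) for new sub-repository metadata."""
    A = np.column_stack([features, np.ones(len(features))])
    return A @ coeffs

# Hypothetical sub-repository metadata rows:
# [size, mean complexity, mean spatial diversity, diversity spread]
X = np.array([
    [500,  0.2, 0.30, 0.05],
    [2000, 0.5, 0.45, 0.10],
    [8000, 0.8, 0.60, 0.20],
])
r2_observed = np.array([0.62, 0.78, 0.91])  # invented CNN accuracies
coeffs = fit_metamodel(X, r2_observed)
```

The payoff of such a metamodel is screening: repository metadata is cheap to compute, so candidate training sets can be ranked before committing to full CNN training runs.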

