automatic data
Recently Published Documents

TOTAL DOCUMENTS: 1255 (FIVE YEARS: 294)
H-INDEX: 33 (FIVE YEARS: 7)

Author(s): Juliana L. Paes, Vinícius de A. Ramos, Marcus V. M. de Oliveira, Marinaldo F. Pinto, Thais A. de P. Lovisi, ...

ABSTRACT Increasing the efficiency of solar dryers while ensuring that the system remains accessible to all users can be achieved through automation with low-cost, easy-to-use sensors. The objective was to develop, implement, and evaluate an automatic system for monitoring drying parameters in a hybrid solar-electric dryer (HSED). Initially, an automated data acquisition system for collecting sample mass, air temperature, and relative air humidity was developed and installed. The automatic mass data acquisition system was calibrated in the dryer, and the automated system was validated against conventional devices for measuring the parameters under study. The data obtained were subjected to analysis of variance, the Tukey test, and linear regression at p ≤ 0.05. The system for turning the exhaust on and off worked efficiently, helping to reduce errors in the mass measurement. The GERAR Mobile App proved easy to use, with intuitive icons and compatibility with the most widely used mobile operating systems, and its Bluetooth communication was fast. Using the Arduino, a low-cost microcontroller, to automate the monitoring made it possible to estimate the mass of the product and to collect drying air temperature and relative air humidity data through the DHT22 sensor. Mass and air temperature readings correlated well between the automatic and conventional systems, but the correlation for relative air humidity was low. Overall, the automatic data acquisition system monitored the drying parameters of agricultural products in the HSED in real time.
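The calibration step described above amounts to fitting a linear map from raw sensor output to reference mass. A minimal sketch of that idea, with hypothetical calibration data (the function name and the ADC counts are illustrative, not from the paper):

```python
# Sketch: calibrating an automatic mass sensor against reference masses
# with ordinary least-squares linear regression, the same kind of linear
# fit used to validate the acquisition system against conventional devices.

def fit_calibration(raw_readings, reference_masses):
    """Return slope and intercept mapping raw sensor counts to grams."""
    n = len(raw_readings)
    mean_x = sum(raw_readings) / n
    mean_y = sum(reference_masses) / n
    sxy = sum((x - mean_x) * (y - mean_y)
              for x, y in zip(raw_readings, reference_masses))
    sxx = sum((x - mean_x) ** 2 for x in raw_readings)
    slope = sxy / sxx
    intercept = mean_y - slope * mean_x
    return slope, intercept

# Hypothetical calibration data: raw ADC counts vs. known masses (g).
raw = [1000, 2000, 3000, 4000]
ref = [50.0, 100.0, 150.0, 200.0]
slope, intercept = fit_calibration(raw, ref)
mass = slope * 2500 + intercept  # estimated mass for a new raw reading
```

In practice such a fit would run on the host side, with the Arduino streaming raw counts over Bluetooth.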


Author(s): Mathias Artus, Mohamed Alabassy, Christian Koch

Current bridge inspection practice relies on paper-based data acquisition, subsequent digitization, and multiple conversions between incompatible formats to facilitate data exchange. This practice is time-consuming, error-prone, cumbersome, and leads to information loss. One aim for future inspection procedures is a fully digitized workflow with loss-free data exchange, which lowers costs and increases efficiency. On the one hand, existing studies have proposed methods to automate data acquisition and visualization for inspections, but these studies lack an open standard that would make the gathered data available to other processes. On the other hand, several studies discuss data structures for exchanging damage information among different stakeholders, but they do not cover automatic data acquisition and transfer. This study focuses on a framework that incorporates automatic damage data acquisition, data transfer, and a damage information model for data exchange, enabling inspectors to use damage data in subsequent analyses and simulations. The proposed framework shows the potential of a comprehensive damage information model together with (semi-)automatic data acquisition and processing.
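The core idea of a loss-free damage information model can be sketched very simply. The snippet below is illustrative only: the paper targets open standards, whereas here a plain dataclass serialized to JSON stands in for a machine-readable damage record that survives a round trip without information loss (all field names are hypothetical):

```python
# Illustrative stand-in for a damage information model: structured
# damage records serialized to an open, machine-readable format so that
# downstream analyses can consume them without manual re-entry.
import json
from dataclasses import dataclass, asdict

@dataclass
class DamageRecord:
    component_id: str   # bridge component the damage belongs to
    damage_type: str    # e.g. "crack", "spalling"
    extent_mm: float    # measured extent of the damage
    inspector: str      # who recorded it

def export_damages(records):
    """Serialize damage records for exchange between stakeholders."""
    return json.dumps([asdict(r) for r in records], indent=2)

records = [DamageRecord("girder-03", "crack", 120.0, "inspector-A")]
payload = export_damages(records)
restored = json.loads(payload)  # loss-free round trip
```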


2021, Vol. 11 (23), pp. 11429
Author(s): Jurgen van den Hoogen, Stefan Bloemheuvel, Martin Atzmueller

With developments in computational power and the vast amounts of (automatically) collected data, industry has become more data-driven. These data-driven approaches to monitoring processes and machinery require modeling methods focused on automated learning and deployment. In this context, deep learning offers industrial diagnostics improved performance and efficiency: deep learning models extract features automatically during training, eliminating time-consuming feature engineering and the need for prior understanding of sophisticated (signal) processing techniques. This paper extends previous work by introducing one-dimensional (1D) CNN architectures that use an adaptive wide-kernel layer to improve the classification of multivariate signals, e.g., time series classification in a fault detection and condition monitoring context. We used multiple prominent benchmark datasets for rolling bearing fault detection to determine the performance of the proposed wide-kernel CNN architectures in different settings; for example, distinctive experimental conditions were tested with varying amounts of training data. We compare the performance of these models with traditional machine learning approaches and discuss different ways of handling multivariate signals with deep learning. The proposed models show promising results for classifying different fault conditions of rolling bearing elements and the respective machine condition, while using a fairly straightforward 1D CNN architecture with minimal data preprocessing. A 1D CNN with an adaptive wide-kernel layer thus seems well suited for fault detection and condition monitoring. In addition, this paper clearly indicates the high potential of deep learning compared with traditional machine learning, particularly in complex multivariate, multi-class classification tasks.
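The building block the abstract refers to is a 1D convolution with an unusually wide first-layer kernel sliding over a multivariate signal. A minimal NumPy sketch of that operation (not the authors' model; shapes, stride, and kernel width are illustrative assumptions):

```python
# Sketch of a single wide-kernel 1D convolution over a multivariate
# signal -- the basic operation that wide-kernel CNN architectures stack
# to classify vibration time series for bearing fault detection.
import numpy as np

def conv1d(signal, kernels, stride=4):
    """signal: (channels, length); kernels: (filters, channels, width)."""
    n_filters, n_channels, width = kernels.shape
    length = signal.shape[1]
    n_out = (length - width) // stride + 1
    out = np.zeros((n_filters, n_out))
    for f in range(n_filters):
        for i in range(n_out):
            window = signal[:, i * stride: i * stride + width]
            out[f, i] = np.sum(window * kernels[f])
    return out

rng = np.random.default_rng(0)
signal = rng.standard_normal((2, 256))      # 2-channel synthetic signal
kernels = rng.standard_normal((8, 2, 64))   # 8 wide kernels (width 64)
features = conv1d(signal, kernels)          # shape: (8, 49)
```

A wide first kernel lets the network see a long stretch of the raw waveform at once, which is why it can replace hand-crafted spectral features.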


2021
Author(s): Xiaoxia Shang, Holger Baars, Iwona S. Stachlewska, Ina Mattis, Mika Komppula

Abstract. Lidar observations were analysed to characterize atmospheric pollen at four EARLINET (European Aerosol Research Lidar Network) stations (Hohenpeißenberg, Germany; Kuopio, Finland; Leipzig, Germany; and Warsaw, Poland) during the ACTRIS-COVID-19 campaign in May 2020. Re-analysed lidar data products, produced by centralized and automatic data processing with the Single Calculus Chain (SCC), were used in this study, focusing on particle backscatter coefficients at 355 and 532 nm and on particle linear depolarization ratios (PDRs) at 532 nm. A novel method for characterizing the depolarization ratio of pure pollen is presented, based on non-linear least-squares regression of lidar-derived backscatter-related Ångström exponents (BAEs) against PDRs. Under the assumption that the BAE between 355 and 532 nm should be zero (± 0.5) for pure pollen, the pollen depolarization ratios were estimated: at the Kuopio and Warsaw stations, the pollen depolarization ratio at 532 nm was 0.24 (0.19–0.28) during birch-dominated pollen periods, whereas at the Hohenpeißenberg and Leipzig stations, pollen depolarization ratios of 0.21 (0.15–0.27) and 0.20 (0.15–0.25) were observed during periods with a mixture of birch and grass pollen. The method was also applied to aerosol classification, using two case examples from the campaign period: different pollen types (or pollen mixtures) were identified at the Warsaw station, and dust and pollen were classified at the Hohenpeißenberg station.
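The pollen criterion above rests on the backscatter-related Ångström exponent being near zero for large, wavelength-independent scatterers. A sketch of that screening step, under the stated assumptions (the function names and sample backscatter values are hypothetical; the actual retrieval uses a non-linear regression over many profiles):

```python
# Sketch: the backscatter-related Angstrom exponent (BAE) between
# 355 and 532 nm, used to isolate pure pollen (BAE ~ 0 +/- 0.5,
# i.e. nearly wavelength-independent backscatter from large particles).
import math

def bae(beta_355, beta_532):
    """Backscatter-related Angstrom exponent between 355 and 532 nm."""
    return math.log(beta_355 / beta_532) / math.log(532.0 / 355.0)

def is_pure_pollen(beta_355, beta_532, tolerance=0.5):
    """Pollen grains are large, so their BAE is close to zero."""
    return abs(bae(beta_355, beta_532)) <= tolerance

# Equal backscatter at both wavelengths gives BAE = 0 (pollen-like),
# while strongly wavelength-dependent backscatter indicates fine aerosol.
pollen_like = is_pure_pollen(1.0, 1.05)
fine_mode = is_pure_pollen(2.0, 1.0)
```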


2021, Vol. 60 (S 02), pp. e111-e119
Author(s): Linyi Li, Adela Grando, Abeed Sarker

Abstract Background Value sets are lists of terms (e.g., opioid medication names) and their corresponding codes from standard clinical vocabularies (e.g., RxNorm), created with the intent of supporting health information exchange and research. Value sets are manually created and often exhibit errors. Objectives The aim of this study is to develop a semi-automatic, data-centric natural language processing (NLP) method to assess the correctness of medication-related value sets and to evaluate it on a set of opioid medication value sets. Methods We developed an NLP algorithm that uses value sets containing mostly true positives and true negatives to learn lexical patterns associated with the true positives, and then applies these patterns to identify potential errors in unseen value sets. We evaluated the algorithm on a set of opioid medication value sets using the recall, precision, and F1-score metrics, and applied the trained model to assess the correctness of unseen opioid value sets based on recall. To replicate the application of the algorithm in real-world settings, a domain expert manually conducted an error analysis to identify potential system and value set errors. Results Thirty-eight value sets were retrieved from the Value Set Authority Center, and six (two opioid, four non-opioid) were used to develop and evaluate the system. Average precision, recall, and F1-score were 0.932, 0.904, and 0.909, respectively, on uncorrected value sets, and 0.958, 0.953, and 0.953, respectively, after manual correction of the same value sets. On 20 unseen opioid value sets, the algorithm obtained an average recall of 0.89. The error analyses revealed that the main source of system misclassifications was differences in how opioids were coded in the value sets: while the training value sets contained mostly generic names, some of the unseen value sets contained new trade names and ingredients. Conclusion The proposed approach is data-centric, reusable, customizable, and not resource-intensive. It may help domain experts validate value sets easily.
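The learn-patterns-then-flag-errors idea, plus the precision/recall/F1 scoring used in the evaluation, can be sketched as follows. This is a hedged illustration, not the study's code: the token-overlap heuristic and all term names are assumptions standing in for the actual lexical patterns.

```python
# Sketch: flag suspect terms in a value set by lexical overlap with
# known true positives, then score the result with precision, recall,
# and F1 as in the study's evaluation.

def learn_patterns(true_positive_terms):
    """Collect tokens seen in known-correct terms (e.g. opioid names)."""
    tokens = set()
    for term in true_positive_terms:
        tokens.update(term.lower().split())
    return tokens

def classify(terms, patterns):
    """Accept a term if it shares any token with the learned patterns."""
    return {t: bool(patterns & set(t.lower().split())) for t in terms}

def scores(predicted, gold):
    tp = sum(1 for t, p in predicted.items() if p and gold[t])
    fp = sum(1 for t, p in predicted.items() if p and not gold[t])
    fn = sum(1 for t, p in predicted.items() if not p and gold[t])
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    return precision, recall, f1

patterns = learn_patterns(["morphine sulfate", "oxycodone hydrochloride"])
pred = classify(["morphine tablet", "ibuprofen tablet"], patterns)
gold = {"morphine tablet": True, "ibuprofen tablet": False}
precision, recall, f1 = scores(pred, gold)
```

The abstract's observation about trade names follows directly: a term whose tokens never appeared among the training true positives will always be flagged, correctly or not.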


2021, Vol. 2132 (1), pp. 012003
Author(s): Song He, Hao Xue, Lejiang Guo, Xin Chen, Jun Hu

Abstract To vividly demonstrate the real-world applications of deep-learning-based intelligent vehicles, especially unmanned scenarios that integrate technologies such as automatic data acquisition, data model construction, automatic curve detection, traffic sign recognition, and verification of unmanned driving, this study adopts an M-type model intelligent vehicle embedded with Baidu's high-performance Edge Board. The vehicle is trained with the PaddlePaddle deep learning framework and the Baidu AI Studio development platform. Through the design of an autonomous control scheme and continuous study of the deep learning algorithm, an intelligent vehicle model based on PaddlePaddle deep learning was obtained. The vehicle drives automatically on a simulated track and, in addition, can distinguish several traffic signs and respond to them accordingly.


Author(s): C. W. Chidiebere, C. E. Duru, J. P. C. Mbagwu

Molecular orbitals are vital to explaining why many chemical reactions occur. Fukui and coworkers proposed that the highest occupied molecular orbital (HOMO) and the lowest unoccupied molecular orbital (LUMO) are of central importance in predicting chemical reactions. This postulate is a rigorous one and requires careful attention to be understood. However, the introduction of computational chemistry has brought a major breakthrough: once a mathematical method is sufficiently well developed, it can be automated and used to predict chemical reactivity. In this review, we report on how the HOMO and LUMO may be employed to predict chemical reactivity with an automatic data processing (ADP) system, using quantum mechanical approximations.
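A common quantitative handle on frontier molecular orbital theory is the HOMO-LUMO gap: a smaller gap usually indicates a more easily perturbed, and hence more reactive, molecule. A toy illustration with hypothetical orbital energies (the values below are invented for the example, not computed results):

```python
# Toy illustration of a frontier-orbital reactivity index: the
# HOMO-LUMO gap, with a smaller gap suggesting higher reactivity.

def homo_lumo_gap(occupied_energies, virtual_energies):
    """Gap in eV: lowest unoccupied minus highest occupied orbital energy."""
    homo = max(occupied_energies)
    lumo = min(virtual_energies)
    return lumo - homo

# Hypothetical orbital energies (eV) for two molecules A and B.
gap_a = homo_lumo_gap([-10.2, -9.1, -6.5], [-1.2, 0.8])   # 5.3 eV
gap_b = homo_lumo_gap([-10.0, -8.8, -7.9], [-0.4, 1.1])   # 7.5 eV
more_reactive = "A" if gap_a < gap_b else "B"
```

In an ADP workflow, the orbital energies themselves would come from a quantum chemical calculation; only the comparison step is shown here.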

