Machine learning-assisted optimization of TBBPA-bis-(2,3-dibromopropyl ether) extraction process from ABS polymer

Chemosphere ◽  
2021 ◽  
pp. 132128
Author(s):  
Yan Wan ◽  
Qiang Zeng ◽  
Pujiang Shi ◽  
Yong-Jin Yoon ◽  
Chor Yong Tay ◽  
...  

2021 ◽  
Vol 11 (13) ◽  
pp. 5826
Author(s):  
Evangelos Axiotis ◽  
Andreas Kontogiannis ◽  
Eleftherios Kalpoutzakis ◽  
George Giannakopoulos

Ethnopharmacology experts face several challenges when identifying and retrieving documents and resources related to their scientific focus. The volume of sources that need to be monitored, the variety of formats utilized, and the varying quality of language use across sources present some of what we call “big data” challenges in the analysis of these data. This study aims to understand if and how experts can be supported effectively through intelligent tools in the task of ethnopharmacological literature research. To this end, we utilize a real case study of ethnopharmacology research focused on the southern Balkans and the coastal zone of Asia Minor, and we propose a methodology for more efficient research in ethnopharmacology. Our work follows an “expert–apprentice” paradigm in an automatic URL extraction process, through crawling, where the apprentice is a machine learning (ML) algorithm utilizing a combination of active learning (AL) and reinforcement learning (RL), and the expert is the human researcher. ML-powered research improved the effectiveness and efficiency of the domain expert by 3.1 and 5.14 times, respectively, fetching a total of 420 relevant ethnopharmacological documents in only 7 h, versus an estimated 36 h of human-expert effort. Therefore, utilizing artificial intelligence (AI) tools to support the researcher can boost the efficiency and effectiveness of the identification and retrieval of appropriate documents.
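
As a concrete illustration of this expert–apprentice loop, the sketch below shows how an apprentice classifier might score crawled pages and query the human expert only near its decision boundary (the active learning step), while relevance scores steer which links are expanded next (the reinforcement learning flavour). All names, thresholds, and the seed-labelling step are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch of the expert-apprentice crawling loop, assuming an
# SGD-based apprentice; names and thresholds are illustrative, not the
# authors' implementation.
import numpy as np
from sklearn.feature_extraction.text import HashingVectorizer
from sklearn.linear_model import SGDClassifier

vectorizer = HashingVectorizer(n_features=2**16, alternate_sign=False)
apprentice = SGDClassifier(loss="log_loss")  # probabilistic scorer
classes = np.array([0, 1])                   # 0 = irrelevant, 1 = relevant

# Bootstrap the apprentice on a few expert-labelled seed pages.
seeds = ["medicinal plants of the southern Balkans", "unrelated sports news"]
apprentice.partial_fit(vectorizer.transform(seeds), [1, 0], classes=classes)

def crawl_step(frontier, fetch, ask_expert, uncertainty=0.35):
    """Score frontier pages; query the expert only near the boundary."""
    texts = [fetch(url) for url in frontier]
    X = vectorizer.transform(texts)
    probs = apprentice.predict_proba(X)[:, 1]
    for x, p, url in zip(X, probs, frontier):
        # Active learning: ask the human expert about uncertain pages.
        if abs(p - 0.5) < uncertainty:
            apprentice.partial_fit(x, [ask_expert(url)], classes=classes)
    # Reward-style prioritization: expand links only from pages that
    # the apprentice currently scores as relevant (the RL flavour).
    return [url for url, p in zip(frontier, probs) if p > 0.5]
```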


Author(s):  
Shubham Shitole

Predicting respiratory diseases at an early stage can be very useful, especially for improving patient survival rates. CT scan images are used to detect various lung diseases. These CT scans are sent to pathologists for further processing: the pathologists analyze the CT scan report and identify the infected tissues that are the main cause of a particular disease. This is a lengthy process, and machine learning plays an important role in shortening it and increasing the accuracy of the prediction. The system proposes to build a "Predictive Diagnostic System" for infectious lung disease using image processing in conjunction with machine learning. The proposed system will detect disease from CT scan images, applying a preprocessing technique to remove noise and disturbance from the image. A feature extraction process is applied to extract useful features from the underlying image, and a feature selection technique then retains only the top-ranking features. A CNN algorithm is then applied to classify the images for the detection of respiratory disease. After detection of the disease, a report is generated and submitted to the patient.
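
The pipeline described above (denoising, feature extraction by convolutional layers, CNN classification) could be sketched as follows; the slice size, filter counts, and two-class output are illustrative assumptions rather than the proposed system's actual configuration.

```python
# Minimal sketch of the denoise -> CNN-classify pipeline, assuming
# 8-bit grayscale CT slices resized to 128x128; sizes are illustrative.
import cv2
from tensorflow import keras
from tensorflow.keras import layers

def preprocess(ct_slice):
    """Remove noise and normalize a CT slice (uint8 input assumed)."""
    denoised = cv2.medianBlur(ct_slice, 3)   # suppress speckle noise
    resized = cv2.resize(denoised, (128, 128))
    return resized.astype("float32") / 255.0

model = keras.Sequential([
    layers.Input(shape=(128, 128, 1)),
    layers.Conv2D(32, 3, activation="relu"),  # feature extraction
    layers.MaxPooling2D(),
    layers.Conv2D(64, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(64, activation="relu"),
    layers.Dense(2, activation="softmax"),    # diseased vs. healthy
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
```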


Symmetry ◽  
2019 ◽  
Vol 11 (2) ◽  
pp. 256 ◽  
Author(s):  
Jiangyong An ◽  
Wanyi Li ◽  
Maosong Li ◽  
Sanrong Cui ◽  
Huanran Yue

Drought stress seriously affects crop growth, development, and grain production. Existing machine learning methods have achieved great progress in drought stress detection and diagnosis. However, such methods are based on a hand-crafted feature extraction process, and their accuracy has much room to improve. In this paper, we propose the use of a deep convolutional neural network (DCNN) to identify and classify maize drought stress. Field drought stress experiments were conducted in 2014, divided into three treatments: optimum moisture, light drought, and moderate drought stress. Maize images were captured by digital cameras every two hours throughout the day. In order to benchmark the accuracy of the DCNN, a comparative experiment was conducted using traditional machine learning on the same dataset. The experimental results demonstrated an impressive performance of the proposed method. For the total dataset, the accuracy of the identification and classification of drought stress was 98.14% and 95.95%, respectively. High accuracy was also achieved on the sub-datasets of the seedling and jointing stages. The identification and classification accuracy levels of the color images were higher than those of the gray images. Furthermore, the comparison experiments on the same dataset demonstrated that the DCNN achieved better performance than the traditional machine learning method (gradient boosting decision tree, GBDT). Overall, our proposed deep learning-based approach is a very promising method for field maize drought identification and classification based on digital images.
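
The comparison reported here contrasts a DCNN trained on raw images with a GBDT trained on hand-crafted features. A minimal sketch of the GBDT side of that comparison is shown below; the per-channel statistics stand in for the hand-crafted features, whose exact definition is not given in the abstract.

```python
# Sketch of a GBDT baseline on hand-crafted features; the features
# (per-channel mean and standard deviation) are illustrative stand-ins.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

def hand_crafted_features(img):
    """Per-channel statistics of an (H, W, 3) color image."""
    return np.concatenate([img.mean(axis=(0, 1)), img.std(axis=(0, 1))])

def gbdt_baseline(images, labels):
    """images: list of (H, W, 3) arrays; labels: 0/1/2 =
    optimum moisture / light drought / moderate drought."""
    X = np.stack([hand_crafted_features(im) for im in images])
    return GradientBoostingClassifier().fit(X, labels)
```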


2016 ◽  
Vol 14 (6) ◽  
pp. 377 ◽  
Author(s):  
Rungsun Kiatpanont, MS ◽  
Uthai Tanlamai, PhD ◽  
Prabhas Chongstitvatana, PhD

Natural disasters cause enormous damage to countries all over the world. To deal with these common problems, different activities are required for disaster management at each phase of the crisis. There are three groups of activities: (1) make sense of the situation and determine how best to deal with it, (2) deploy the necessary resources, and (3) harmonize as many parties as possible, using the most effective communication channels. Current technological improvements and developments now enable people to act as real-time information sources. As a result, inundation with crowdsourced data poses a real challenge for a disaster manager. The problem is how to extract the valuable information from a gigantic data pool in the shortest possible time so that the information is still useful and actionable. This research proposed an actionable-data-extraction process to deal with the challenge. Twitter was selected as a test case because messages posted on Twitter are publicly available. Hashtags, an easy and very efficient technique, were also used to differentiate information. A quantitative approach to extracting useful information from the tweets was supported and verified by interviews with disaster managers from many leading organizations in Thailand to understand their missions. Classification of the information extracted from the collected tweets was first performed manually, and the labeled tweets were then used to train a machine learning algorithm to classify future tweets. One particularly useful and significant category was the request for help. The support vector machine algorithm was used to validate the results of the extraction process on 13,696 sample tweets, with over 74 percent accuracy. The results confirmed that the machine learning technique could significantly and practically assist with disaster management by dealing with crowdsourced data.
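
A minimal sketch of that classification step is shown below, using TF-IDF features and a linear SVM; the example tweets and category names are invented for illustration and are not from the study's data.

```python
# Sketch of tweet classification with a linear SVM; the training
# examples and category names below are illustrative placeholders.
from sklearn.pipeline import make_pipeline
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.svm import LinearSVC

tweets = ["need water and food at the shelter #flood",
          "roads to the district are flooded #flood"]
labels = ["request_for_help", "situation_report"]  # from manual labeling

clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LinearSVC())
clf.fit(tweets, labels)
print(clf.predict(["please send boats, we are trapped #flood"]))
```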


An electrocardiogram (ECG) records the electrical activity of the heart over a period of time. Detailed information about the condition of the heart is obtained by analyzing the ECG signal; the wavelet transform and the fast Fourier transform are two methods used to diagnose cardiac disease. This paper presents a survey of ECG signal analysis and related studies on arrhythmic and non-arrhythmic data. We discuss an efficient feature extraction process for the electrocardiogram, in which the six best P-QRS-T fragments, chosen by position and priority, are studied. The survey examines system outcomes using various machine learning classification algorithms for feature extraction and analysis of ECG signals. Support vector machine (SVM), k-nearest neighbor (KNN), and artificial neural network (ANN) are the most important algorithms used for this purpose. Several publicly available datasets are used for arrhythmia analysis, among which the MIT-BIH ECG-ID database is the most widely used. Drawbacks and limitations are also discussed, from which future challenges and concluding remarks are drawn.
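
To make the surveyed pipeline concrete, the sketch below cuts fixed-length P-QRS-T fragments around detected R-peaks, which could then feed any of the classifiers the survey compares (SVM, KNN, ANN). The window length and peak-detection parameters are illustrative assumptions.

```python
# Sketch of P-QRS-T fragment extraction around R-peaks; window and
# peak-detection parameters are illustrative assumptions.
import numpy as np
from scipy.signal import find_peaks

def pqrst_fragments(ecg, fs=360, half_window=0.35):
    """Cut a fragment around each R-peak (MIT-BIH records use fs = 360 Hz)."""
    peaks, _ = find_peaks(ecg, distance=int(0.4 * fs), height=np.std(ecg))
    w = int(half_window * fs)
    return np.array([ecg[p - w:p + w]
                     for p in peaks if w <= p < len(ecg) - w])

# The resulting fragments become feature vectors for, e.g.,
# sklearn.svm.SVC or sklearn.neighbors.KNeighborsClassifier.
```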


Author(s):  
Anindita Das Bhattacharjee

The accessibility problem is relevant to audiovisual information, where enormous amounts of data have to be explored and processed. Most solutions to this type of problem point toward a recurring need to extract applicable information features for a given content domain, and the feature extraction process involves two complicated tasks: first deciding which features to use, and then extracting them. Good features are expected to have certain properties: repeatability, distinctiveness, locality, quantity, accuracy, efficiency, and invariance. Different feature extraction techniques are described. The chapter presents a survey on feature extraction and image formation, considering feature extraction from both images and video. Feature extraction is a common mechanism in machine learning, pattern recognition, and image processing, where it makes a significant contribution. It starts from an initial set of measured data and constructs derived informative values that are non-redundant in nature.
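
As one concrete example of the extraction step, the snippet below computes ORB keypoints and descriptors for an image; ORB is just one of the many techniques such a survey covers, chosen here because its binary descriptors illustrate the properties listed above (repeatability, locality, invariance). The file name is a placeholder.

```python
# Example of feature extraction from an image or video frame using ORB;
# "frame.png" is a placeholder path.
import cv2

img = cv2.imread("frame.png", cv2.IMREAD_GRAYSCALE)
orb = cv2.ORB_create(nfeatures=500)
keypoints, descriptors = orb.detectAndCompute(img, None)
# keypoints locate the features; descriptors are the compact,
# non-redundant "derived informative values" built from raw pixels.
print(len(keypoints), None if descriptors is None else descriptors.shape)
```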


2020 ◽  
Vol 10 (19) ◽  
pp. 6896
Author(s):  
Paloma Tirado-Martin ◽  
Judith Liu-Jimenez ◽  
Jorge Sanchez-Casanova ◽  
Raul Sanchez-Reillo

Currently, machine learning techniques are successfully applied in biometrics, and in electrocardiogram (ECG) biometrics specifically. However, few works deal with different physiological states in the user, which can produce significant heart rate variations; these variations are a key issue when working with ECG biometrics. Machine learning techniques simplify the feature extraction process, which can sometimes be reduced to a fixed segmentation. The database used here includes visits taken on two different days and under three different conditions (sitting down, standing up, after exercise), which is not common in current public databases. These characteristics allow the study of differences among users under different scenarios, which may affect the pattern in the acquired data. A multilayer perceptron (MLP) is used as the classifier to form a baseline, as it has a simple structure that has provided good results in the state of the art. This work studies its behavior in ECG verification using QRS complexes, finding the best hyperparameter configuration through tuning. The final performance is calculated considering different visits for enrolment and verification. Differentiation of the QRS complexes is also tested, as it is already required for detection, showing that a simple first differentiation gives good results in comparison with similar state-of-the-art works. Moreover, it also reduces the computational cost by avoiding complex transformations and using only one type of signal. When different numbers of complexes are applied, the best results are obtained with 100 and 187 complexes at enrolment, yielding Equal Error Rates (EER) between 2.79–4.95% and 2.69–4.71%, respectively.
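
A minimal sketch of the described baseline follows: each fixed-length QRS segment is first-differentiated and an MLP is fitted for verification. The hidden-layer size and training settings are illustrative, not the tuned configuration found in the paper.

```python
# Sketch of the MLP verification baseline on differentiated QRS
# complexes; hyperparameters below are illustrative, not the tuned ones.
import numpy as np
from sklearn.neural_network import MLPClassifier

def differentiate(qrs_batch):
    """First differentiation of fixed-length QRS segments, shape (n, L)."""
    return np.diff(qrs_batch, axis=1)

def enrol(qrs_enrol, y_enrol):
    """qrs_enrol: e.g. 100 or 187 complexes from the enrolment visit;
    y_enrol: genuine-vs-impostor labels for verification."""
    mlp = MLPClassifier(hidden_layer_sizes=(64,), max_iter=500)
    return mlp.fit(differentiate(qrs_enrol), y_enrol)
```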


Healthcare ◽  
2020 ◽  
Vol 8 (1) ◽  
pp. 34
Author(s):  
Sabyasachi Chakraborty ◽  
Satyabrata Aich ◽  
Hee-Cheol Kim

Parkinson’s disease is caused by the progressive loss of dopaminergic neurons in the substantia nigra pars compacta (SNc). With the exponential growth of the aging population across the world, the number of people affected by the disease is increasing, and it imposes a huge economic burden on governments. However, to date, no therapy or treatment has been found that can completely eradicate the disease. Therefore, early detection of Parkinson’s disease is very important, so that the progressive loss of dopaminergic neurons can be controlled and patients can be given a better life. In this study, 3T T1-MRI scans were collected from 906 subjects: 203 control subjects, 66 prodromal subjects, and 637 Parkinson’s disease patients. To analyze the MRI scans for the detection of neurodegeneration and Parkinson’s disease, eight subcortical structures were segmented from the acquired scans using atlas-based segmentation. Textural, morphological, and statistical features were then extracted from the eight subcortical structures, yielding an exhaustive set of 107 features for each MRI scan. A two-level feature selection process was therefore implemented to find the best possible feature set for the detection of Parkinson’s disease. This procedure leveraged correlation analysis and recursive feature elimination, which ultimately provided the 20 best-performing features out of the 107 extracted. The features were then used to train machine learning algorithms, and a comparative analysis was performed between four different machine learning algorithms based on the selected performance metrics. It was observed that an artificial neural network (multilayer perceptron) performed best, with an overall accuracy of 95.3%, recall of 95.41%, precision of 97.28%, and F1-score of 94%.
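
The two-level selection procedure (correlation analysis followed by recursive feature elimination down to 20 features) could be sketched as below; the correlation threshold and the RFE base estimator are assumptions, as the abstract does not specify them.

```python
# Sketch of two-level feature selection: a correlation filter, then
# RFE down to 20 features. Threshold and estimator are assumptions.
import numpy as np
import pandas as pd
from sklearn.feature_selection import RFE
from sklearn.linear_model import LogisticRegression

def select_features(X: pd.DataFrame, y, corr_threshold=0.9, n_final=20):
    # Level 1: correlation analysis - drop one of each highly
    # correlated feature pair (upper triangle of |corr| matrix).
    corr = X.corr().abs()
    upper = corr.where(np.triu(np.ones(corr.shape, dtype=bool), k=1))
    drop = [c for c in upper.columns if (upper[c] > corr_threshold).any()]
    X_filtered = X.drop(columns=drop)
    # Level 2: recursive feature elimination to the n_final best features.
    rfe = RFE(LogisticRegression(max_iter=1000), n_features_to_select=n_final)
    rfe.fit(X_filtered, y)
    return X_filtered.columns[rfe.support_]
```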


2020 ◽  
Vol 26 (1) ◽  
pp. 1-9
Author(s):  
Aditya Kakde ◽  
Nitin Arora ◽  
Durgansh Sharma ◽  
Subhash Chander Sharma

According to the Google I/O 2018 keynote, artificial intelligence, which also includes machine learning and deep learning, will in future evolve mostly in the healthcare domain. As there are many subdomains within healthcare, the proposed paper concentrates on two such areas: breast cancer and pneumonia. Today, just classifying diseases is not enough; the system should also be able to classify a particular patient’s disease. Thus, this paper shines a light on the importance of multispectral classification, which means the collection of several monochrome images of the same scene. It can prove to be an important process in healthcare for determining whether a patient is suffering from a specific disease. Convolutional layers followed by pooling layers are used for the feature extraction process, and for the classification process, fully connected layers followed by a regression layer are used.
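
A minimal sketch of that layer pattern follows, assuming single-channel (monochrome) inputs and a binary disease label; the input size and filter counts are illustrative.

```python
# Sketch of the described layer pattern: convolution + pooling for
# feature extraction, fully connected layers ending in a regression-
# style (sigmoid) output. Sizes below are illustrative assumptions.
from tensorflow import keras
from tensorflow.keras import layers

model = keras.Sequential([
    layers.Input(shape=(224, 224, 1)),        # one monochrome channel
    layers.Conv2D(16, 3, activation="relu"),  # feature extraction
    layers.MaxPooling2D(),
    layers.Conv2D(32, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(128, activation="relu"),     # fully connected
    layers.Dense(1, activation="sigmoid"),    # regression-style output
])
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=["accuracy"])
```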

