Machine Learning to Perform Segmentation and 3D Projection of Abnormal Tissues by Endoscopy Images

2020
Vol 17 (5)
pp. 2296-2303
Author(s):  
V. Adithya Pothan Raj ◽  
P. Mohan Kumar

Images obtained by endoscopy provide the normal direction of the tissue contour, an important anatomical parameter for segmentation algorithms. Because tissue image sizes vary, the intensity values of the tissues are typically non-uniform and inherently noisy, so estimating the normal direction in a single iteration is unreliable. We developed a multi-iteration algorithm for estimating the direction normal to the edge of defective tissue. From the experimental results, the estimation reliability is formulated over multiple iterations, and the estimate after the last iteration corrects the normal direction. We obtain a balance at all points during the normal-direction estimation, which is used by the edge detector. The implementation results show that the proposed algorithm reduces the number of spurious boundaries and gaps in the extracted outlines, thereby improving the quality of segmentation and 3D projection. The corrected output can also be used to remove false edges in post-processing. The performance of the proposed algorithm is measured over multiple iterations and the results are tabulated.
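As an illustrative sketch of the multi-iteration idea (not the authors' algorithm; the smoothing scheme and iteration count here are assumptions), the normal at an edge pixel can be estimated by accumulating unit gradient directions while progressively denoising the image:

```python
import numpy as np

def edge_normal(img, y, x, iterations=3):
    """Estimate the unit normal to an edge at (y, x) by averaging
    gradient directions over several denoising iterations."""
    g = img.astype(float)
    acc = np.zeros(2)
    for _ in range(iterations):
        gy, gx = np.gradient(g)              # intensity gradients per axis
        v = np.array([gy[y, x], gx[y, x]])
        norm = np.linalg.norm(v)
        if norm > 0:
            acc += v / norm                  # accumulate unit directions
        # simple 5-point smoothing before the next iteration
        g = (g + np.roll(g, 1, 0) + np.roll(g, -1, 0)
               + np.roll(g, 1, 1) + np.roll(g, -1, 1)) / 5.0
    norm = np.linalg.norm(acc)
    return acc / norm if norm > 0 else acc

# synthetic tissue image: bright region on the right, vertical edge
img = np.zeros((32, 32))
img[:, 16:] = 1.0
n = edge_normal(img, 16, 16)                 # normal points across the edge
```

Averaging over iterations is what suppresses the single-iteration unreliability the abstract mentions: a noisy gradient at one smoothing level is corrected by the others.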

Author(s):  
S. Hensel ◽  
S. Goebbels ◽  
M. Kada

Abstract. A challenge in data-based 3D building reconstruction is to find the exact edges of roof facet polygons. Although these edges are visible in orthoimages, convolution-based edge detectors also find many other edges due to shadows and textures. In this feasibility study, we apply machine learning to solve this problem. Recently, neural networks have been introduced that not only detect edges in images but also assemble the edges into a graph. When applied to roof reconstruction, the vertices of the dual graph represent the roof facets. In this study, we apply the Point-Pair Graph Network (PPGNet) to orthoimages of buildings and evaluate the quality of the detected edge graphs. The initial results were promising, and adjusting the training parameters improved them further. However, in some cases additional work, such as post-processing, is required to reliably find all vertices.


Author(s):  
Feidu Akmel ◽  
Ermiyas Birihanu ◽  
Bahir Siraj

Software systems are software products or applications that support business domains such as manufacturing, aviation, health care, and insurance. Software quality is a means of measuring how software is designed and how well it conforms to that design. Variables commonly examined for software quality include correctness, product quality, scalability, completeness, and absence of bugs. However, because the quality standards used by one organization differ from those of another, it is better to apply software metrics to measure software quality. Attributes gathered from source code through software metrics can serve as input to a software defect predictor. Software defects are errors introduced by software developers and stakeholders. Finally, in this study we review the application of machine learning to software defect data gathered from previous research works.
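A minimal sketch of the metrics-to-predictor pipeline the abstract describes (the metric names, toy values, and nearest-neighbour model are illustrative assumptions, not the study's):

```python
import numpy as np

# Toy module metrics: [lines of code, cyclomatic complexity, comment ratio]
X_train = np.array([
    [120.0,  4.0, 0.30], [ 80.0,  3.0, 0.25],   # clean modules
    [950.0, 28.0, 0.02], [700.0, 22.0, 0.05],   # defect-prone modules
])
y_train = np.array([0, 0, 1, 1])                # 1 = defective

def predict_defect(x, X, y, k=1):
    """k-nearest-neighbour defect prediction on z-scored metrics."""
    mu, sd = X.mean(axis=0), X.std(axis=0)
    d = np.linalg.norm((X - mu) / sd - (np.asarray(x, float) - mu) / sd,
                       axis=1)
    votes = y[np.argsort(d)[:k]]                # labels of nearest modules
    return int(votes.mean() >= 0.5)
```

In practice the metrics would be extracted by a static-analysis tool and fed to whichever learner the predictor uses; the point is only that source-code metrics form the feature vector.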


2020
Author(s):  
Saeed Nosratabadi ◽  
Amir Mosavi ◽  
Puhong Duan ◽  
Pedram Ghamisi ◽  
Ferdinand Filip ◽  
...  

This paper provides a state-of-the-art investigation of advances in data science in emerging economic applications. The analysis covers novel data science methods in four classes: deep learning models, hybrid deep learning models, hybrid machine learning, and ensemble models. Application domains include a wide and diverse range of economics research, from the stock market, marketing, and e-commerce to corporate banking and cryptocurrency. The PRISMA method, a systematic literature review methodology, was used to ensure the quality of the survey. The findings reveal that the trends follow the advancement of hybrid models, which, based on the accuracy metric, outperform other learning algorithms. It is further expected that the trends will converge toward the advancement of sophisticated hybrid deep learning models.


2020
Vol 20 (9)
pp. 720-730
Author(s):  
Iker Montes-Bageneta ◽  
Urtzi Akesolo ◽  
Sara López ◽  
Maria Merino ◽  
Eneritz Anakabe ◽  
...  

Aims: Computational modelling may help us to detect the most important factors governing waste generation in order to optimize it. Background: The generation of hazardous organic waste in teaching and research laboratories poses a major problem that universities have to manage. Methods: In this work, we report the experimental measurement of waste generation in the chemical education laboratories of our department. We measured the waste generated in the teaching laboratories of the Organic Chemistry Department II (UPV/EHU) in the second semester of the 2017/2018 academic year. To identify the anthropogenic and social factors related to waste generation, a questionnaire was administered to all students of the Experimentation in Organic Chemistry (EOC) and Organic Chemistry II (OC2) subjects. It captured their prior knowledge about waste, their awareness of the problem of separating organic waste, and their correct use of the containers. These results, together with the volumetric data, were analyzed with statistical analysis software. We obtained two Perturbation-Theory Machine Learning (PTML) models including chemical, operational, and academic factors. The dataset analyzed comprised 6050 cases of laboratory practices vs. reference practices. Results: These models predict the values of acetone waste with R2 = 0.88 and of non-halogenated waste with R2 = 0.91. Conclusion: This work opens a new gate to the implementation of more sustainable techniques and of a circular economy, with the aim of improving the quality of university education processes.
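For reference, the coefficient of determination (R²) with which the PTML models' fit is reported can be computed as follows (toy values; the waste data themselves are not reproduced here):

```python
def r2_score(y_true, y_pred):
    """Coefficient of determination: 1 - SS_res / SS_tot."""
    mean = sum(y_true) / len(y_true)
    ss_res = sum((t - p) ** 2 for t, p in zip(y_true, y_pred))
    ss_tot = sum((t - mean) ** 2 for t in y_true)
    return 1 - ss_res / ss_tot

# near-perfect toy predictions; the paper's models reach 0.88 and 0.91
r2 = r2_score([1.0, 2.0, 3.0], [1.1, 1.9, 3.2])
```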


Entropy
2019
Vol 21 (5)
pp. 513
Author(s):  
Héctor D. Menéndez ◽  
José Luis Llorente

The quality of anti-virus software relies on simple patterns extracted from binary files. Although these patterns have proven effective at detecting specific software, they are extremely sensitive to concealment strategies such as polymorphism or metamorphism. These limitations also make anti-virus software predictable, creating a security breach: any black hat with enough information about the anti-virus behaviour can make their own copy of the software, without any access to the original implementation or database. In this work, we show that this is indeed possible by combining entropy patterns with classification algorithms. Our results, applied to 57 different anti-virus engines, show that we can mimic their behaviour with an accuracy close to 98% in the best case and 75% in the worst, applied to Windows disk-resident malware.
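A sliding-window byte-entropy feature of the kind the paper combines with classifiers can be sketched as follows (the window and step sizes are arbitrary choices here, not the paper's):

```python
import math
import os
from collections import Counter

def window_entropies(data: bytes, win=256, step=256):
    """Shannon entropy (bits per byte) over sliding windows of a binary."""
    out = []
    for i in range(0, max(len(data) - win + 1, 1), step):
        chunk = data[i:i + win]
        n = len(chunk)
        out.append(-sum(c / n * math.log2(c / n)
                        for c in Counter(chunk).values()))
    return out

flat = bytes(512)       # zero padding: entropy 0
rnd = os.urandom(512)   # packed/encrypted-like section: near 8 bits/byte
```

The resulting entropy profile is exactly the sort of pattern that survives polymorphic rewriting better than byte signatures, which is why it is usable as a classifier input.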


2021
Vol 40 (5)
pp. 9361-9382
Author(s):  
Naeem Iqbal ◽  
Rashid Ahmad ◽  
Faisal Jamil ◽  
Do-Hyeun Kim

Quality prediction plays an essential role in the business outcome of a product and, given its business interest, has been studied extensively in recent years. With advances in machine learning (ML) and the advent of robust and sophisticated ML algorithms, it has become feasible to analyze the factors influencing the success of movies. This paper presents a hybrid-features prediction model based on pre-release and social media data features, using multiple ML techniques to predict the quality of pre-release movies for effective business resource planning. This study aims to integrate pre-release and social media data features into a hybrid features-based movie quality prediction (MQP) model. The proposed model comprises two experimental variants: (i) predicting movie quality using the original set of features and (ii) predicting the movie success class using a subset of features derived by principal component analysis. This work employs different ML-based classification models, such as Decision Tree (DT), Support Vector Machines with linear and quadratic kernels (L-SVM and Q-SVM), Logistic Regression (LR), Bagged Tree (BT), and Boosted Tree (BOT), to predict the quality of the movies. Different performance measures are used to evaluate these models: Accuracy (AC), Precision (PR), Recall (RE), and F-Measure (FM). The experimental results reveal that the BT and BOT classifiers produced higher accuracy than the other classifiers (DT, LR, L-SVM, and Q-SVM), achieving 90.1% and 89.7%, respectively, which shows the efficiency of the proposed MQP model compared to other state-of-the-art techniques. The proposed work is also compared with existing prediction models, and the experimental results indicate that the MQP model performs slightly better. These results will help the movie industry formulate business resources effectively, such as investment, number of screens, and release-date planning.
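The second experimental model reduces the feature set with principal component analysis; a minimal PCA projection (on synthetic data, since the movie feature set is not public here) looks like:

```python
import numpy as np

def pca_reduce(X, n_components):
    """Project rows of X onto the top principal components."""
    Xc = X - X.mean(axis=0)
    cov = np.cov(Xc, rowvar=False)          # sample covariance matrix
    vals, vecs = np.linalg.eigh(cov)        # eigenvalues in ascending order
    order = np.argsort(vals)[::-1][:n_components]
    return Xc @ vecs[:, order]              # scores on the top components

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 6))               # stand-in for hybrid features
X[:, 0] *= 10                               # one dominant variance direction
Z = pca_reduce(X, 2)
```

The reduced matrix Z would then be fed to the DT/SVM/LR/BT/BOT classifiers in place of the original features.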


Polymers
2021
Vol 13 (3)
pp. 353
Author(s):  
Kun-Cheng Ke ◽  
Ming-Shyan Huang

Conventional methods for assessing the quality of components mass produced using injection molding are expensive and time-consuming or involve imprecise statistical process control parameters. A suitable alternative would be to employ machine learning to classify the quality of parts by using quality indices and quality grading. In this study, we used a multilayer perceptron (MLP) neural network along with a few quality indices to accurately predict the quality of “qualified” and “unqualified” geometric shapes of a finished product. These quality indices, which exhibited a strong correlation with part quality, were extracted from pressure curves and input into the MLP model for learning and prediction. By filtering outliers from the input data and converting the measured quality into quality grades used as output data, we increased the prediction accuracy of the MLP model and classified the quality of finished parts into various quality levels. The MLP model may misjudge datapoints in the “to-be-confirmed” area, which is located between the “qualified” and “unqualified” areas. We classified the “to-be-confirmed” area, and only the quality of products in this area were evaluated further, which reduced the cost of quality control considerably. An integrated circuit tray was manufactured to experimentally demonstrate the feasibility of the proposed method.
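Extracting quality indices from a pressure curve can be sketched as follows; the specific indices below (peak pressure, time of peak, pressure integral) are illustrative choices, whereas the paper selects the indices most strongly correlated with part quality:

```python
import numpy as np

def quality_indices(pressure, dt=0.01):
    """Candidate quality indices from an injection-pressure curve."""
    p = np.asarray(pressure, dtype=float)
    return {
        "peak": p.max(),                 # maximum cavity pressure
        "t_peak": p.argmax() * dt,       # time at which the peak occurs
        "integral": p.sum() * dt,        # area under the pressure curve
    }

# synthetic pressure pulse standing in for a measured curve
t = np.linspace(0, 1, 101)
curve = 80 * np.exp(-((t - 0.3) ** 2) / 0.01)
idx = quality_indices(curve, dt=0.01)
```

Vectors of such indices, one per molding cycle, are what the MLP would take as input for quality grading.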


2021
Vol 48 (4)
pp. 41-44
Author(s):  
Dena Markudova ◽  
Martino Trevisan ◽  
Paolo Garza ◽  
Michela Meo ◽  
Maurizio M. Munafo ◽  
...  

With the spread of broadband Internet, Real-Time Communication (RTC) platforms have become increasingly popular and have transformed the way people communicate. Thus, it is fundamental that the network adopts traffic management policies that ensure appropriate Quality of Experience to users of RTC applications. A key step for this is the identification of the applications behind RTC traffic, which in turn allows the network to allocate adequate resources and make decisions based on the specific application's requirements. In this paper, we introduce a machine learning-based system for identifying the traffic of RTC applications. It builds on the domains contacted before starting a call and leverages techniques from Natural Language Processing (NLP) to build meaningful features. Our system works in real-time and is robust to the peculiarities of the RTP implementations of different applications, since it uses only control traffic. Experimental results show that our approach classifies 5 well-known meeting applications with an F1 score of 0.89.
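Building NLP features over the domains contacted before a call can be sketched with character trigrams and cosine similarity (the domain lists below are hypothetical examples, and the real system's features are richer):

```python
from collections import Counter

def trigram_features(domains):
    """Bag of character trigrams over a call's pre-call domain list."""
    feats = Counter()
    for d in domains:
        s = f"^{d}$"                     # mark start and end of each domain
        feats.update(s[i:i + 3] for i in range(len(s) - 2))
    return feats

def cosine(a, b):
    """Cosine similarity between two trigram bags."""
    num = sum(a[k] * b[k] for k in set(a) | set(b))
    den = (sum(v * v for v in a.values()) ** 0.5
           * sum(v * v for v in b.values()) ** 0.5)
    return num / den

# hypothetical pre-call domain lists for two different RTC apps
call_a = ["teams.microsoft.com", "config.teams.microsoft.com"]
call_b = ["zoom.us", "zoomgov.com"]
fa, fb = trigram_features(call_a), trigram_features(call_b)
```

Calls to the same application yield similar trigram bags, which is the property a downstream classifier exploits.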


2021
Vol 11 (13)
pp. 5826
Author(s):  
Evangelos Axiotis ◽  
Andreas Kontogiannis ◽  
Eleftherios Kalpoutzakis ◽  
George Giannakopoulos

Ethnopharmacology experts face several challenges when identifying and retrieving documents and resources related to their scientific focus. The volume of sources that need to be monitored, the variety of formats utilized, and the varying quality of language use across sources present some of what we call "big data" challenges in the analysis of these data. This study aims to understand if and how experts can be supported effectively through intelligent tools in the task of ethnopharmacological literature research. To this end, we utilize a real case study of ethnopharmacological research focused on the southern Balkans and the coastal zone of Asia Minor, and we propose a methodology for more efficient research in ethnopharmacology. Our work follows an "expert–apprentice" paradigm in an automatic URL extraction process, through crawling, where the apprentice is a machine learning (ML) algorithm utilizing a combination of active learning (AL) and reinforcement learning (RL), and the expert is the human researcher. ML-powered research improved the effectiveness and efficiency of the domain expert by factors of 3.1 and 5.14, respectively, fetching a total of 420 relevant ethnopharmacological documents in only 7 h versus an estimated 36 h of human-expert effort. Therefore, utilizing artificial intelligence (AI) tools to support the researcher can boost the efficiency and effectiveness of the identification and retrieval of appropriate documents.
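The expert–apprentice loop can be sketched as pool-based active learning with uncertainty sampling (a toy version: the real system also uses reinforcement learning, and the scoring function and URLs below are placeholders):

```python
def active_learning_crawl(pool, expert_label, score, rounds=3, batch=2):
    """Apprentice queries the expert on the URLs it is least sure about
    (scores closest to 0.5), accumulating labels round by round."""
    labeled = {}
    for _ in range(rounds):
        unlabeled = [u for u in pool if u not in labeled]
        if not unlabeled:
            break
        # uncertainty sampling: nearest the decision boundary first
        unlabeled.sort(key=lambda u: abs(score(u) - 0.5))
        for u in unlabeled[:batch]:
            labeled[u] = expert_label(u)   # the human expert labels it
    return labeled

# toy pool of crawled URLs and a placeholder relevance scorer
pool = [f"site{i}.example/{'herb' if i % 2 else 'news'}" for i in range(8)]
expert = lambda u: "herb" in u             # expert's ground-truth judgment
score = lambda u: 0.6 if "herb" in u else 0.1
picked = active_learning_crawl(pool, expert, score, rounds=3, batch=2)
```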


2017
Vol 3 (1)
pp. 7-10
Author(s):  
Jan Kuschan ◽  
Henning Schmidt ◽  
Jörg Krüger

Abstract: This paper presents an analysis of two distinct human lifting movements with regard to acceleration and angular velocity. For the first, ergonomic movement, the test persons produced the lifting power by squatting down, bending at the hips and knees only; performing the unergonomic one, they bent forward and lifted the box mainly with their backs. The measurements were taken using a vest equipped with five Inertial Measurement Units (IMUs) with 9 degrees of freedom (DOF) each. In the following, the IMU data captured for these two movements are evaluated statistically, visualized, and discussed with respect to their suitability as features for further machine learning classification. The reason for observing these movements is that occupational diseases of the musculoskeletal system lead to a reduction in workers' quality of life and extra costs for companies. Therefore, a vest called CareJack was designed to give the worker real-time feedback on his ergonomic state while working. The CareJack is an approach to reducing the risk of spinal and back diseases. This paper also presents the idea behind it as well as its main components.
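Statistical features of the kind discussed (per-axis mean, standard deviation, and range over an acceleration window) can be computed as follows; the signals below are synthetic stand-ins for real IMU data:

```python
import numpy as np

def imu_features(acc):
    """Per-axis mean, std, and peak-to-peak range of an (N, 3)
    acceleration window, concatenated into one feature vector."""
    acc = np.asarray(acc, dtype=float)
    return np.concatenate([acc.mean(axis=0),
                           acc.std(axis=0),
                           np.ptp(acc, axis=0)])

rng = np.random.default_rng(1)
squat = rng.normal(0.0, 0.2, size=(200, 3))   # smoother, low-variance lift
bend = rng.normal(0.0, 1.0, size=(200, 3))    # jerkier, high-variance lift
f_sq, f_bd = imu_features(squat), imu_features(bend)
```

Feature vectors like these, one per lifting repetition, are what a downstream classifier would separate into ergonomic vs. unergonomic movements.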

