Evaluation of Commonly Used Algorithms for Thyroid Ultrasound Images Segmentation and Improvement Using Machine Learning Approaches

2018 ◽  
Vol 2018 ◽  
pp. 1-13 ◽  
Author(s):  
Prabal Poudel ◽  
Alfredo Illanes ◽  
Debdoot Sheet ◽  
Michael Friebe

The thyroid is one of the largest endocrine glands in the human body and is involved in several body mechanisms, such as controlling protein synthesis, the body's sensitivity to other hormones, and the use of energy sources. It is therefore of prime importance to track the shape and size of the thyroid over time in order to evaluate its state. Thyroid segmentation and volume computation are important tools for this tracking and assessment. Most of the proposed approaches are not automatic and require a long time to correctly segment the thyroid. In this work, we compare three nonautomatic segmentation algorithms (active contours without edges, graph cut, and a pixel-based classifier) in freehand three-dimensional ultrasound imaging in terms of accuracy, robustness, ease of use, level of human interaction required, and computation time. We found that these methods lack automation and machine intelligence and are not highly accurate. Hence, we implemented two machine learning approaches (random forest and a convolutional neural network) to improve the accuracy of segmentation as well as to provide automation. This comparative study discusses and analyses the advantages and disadvantages of the different algorithms. In the last step, the volume of the thyroid is computed from the segmentation results, and the performance of all the algorithms is analysed by comparing the segmentation results with the ground truth.
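The abstract's final step, computing volume from the segmentation masks and scoring them against ground truth, can be sketched in plain Python. The Dice coefficient below is one common overlap score for segmentation evaluation; the voxel size and mask shapes are invented placeholders, not values from the paper:

```python
def dice_coefficient(pred, truth):
    """Dice overlap between two binary masks (nested lists of 0/1)."""
    inter = sum(p & t for row_p, row_t in zip(pred, truth)
                for p, t in zip(row_p, row_t))
    size_p = sum(sum(row) for row in pred)
    size_t = sum(sum(row) for row in truth)
    return 2.0 * inter / (size_p + size_t) if (size_p + size_t) else 1.0

def volume_from_masks(masks, voxel_volume_mm3):
    """Thyroid volume: count segmented voxels across slices, scale by voxel size."""
    voxels = sum(sum(sum(row) for row in mask) for mask in masks)
    return voxels * voxel_volume_mm3
```

A Dice score of 1.0 means perfect agreement with the ground truth; the volume is simply the voxel count scaled by the (scanner-dependent) voxel volume.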

Cancers ◽  
2021 ◽  
Vol 13 (11) ◽  
pp. 2764
Author(s):  
Xin Yu Liew ◽  
Nazia Hameed ◽  
Jeremie Clos

A computer-aided diagnosis (CAD) expert system is a powerful tool for efficiently assisting a pathologist in reaching an early diagnosis of breast cancer. The process identifies the presence of cancer in breast tissue samples and distinguishes the cancer's stage. In a standard CAD system, the main pipeline involves image pre-processing, segmentation, feature extraction, feature selection, classification, and performance evaluation. In this review paper, we survey the existing state-of-the-art machine learning approaches applied at each stage, covering both conventional and deep learning methods, compare the methods, and provide technical details together with their advantages and disadvantages. The aims are to investigate the impact of CAD systems using histopathology images, to identify deep learning methods that outperform conventional ones, and to provide a summary for future researchers seeking to analyse and improve the existing techniques. Lastly, we discuss the research gaps of existing machine learning approaches and propose future direction guidelines for upcoming researchers.
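The standard CAD pipeline the abstract enumerates can be sketched as a chain of small functions. Everything below is a hypothetical illustration: the threshold rule stands in for a trained classifier, and all cutoffs and feature names are invented:

```python
def preprocess(image):
    """Pre-processing: normalize pixel intensities to [0, 1]."""
    lo, hi = min(map(min, image)), max(map(max, image))
    span = (hi - lo) or 1
    return [[(p - lo) / span for p in row] for row in image]

def segment(image, threshold=0.5):
    """Segmentation: binary mask of pixels above an intensity threshold."""
    return [[1 if p > threshold else 0 for p in row] for row in image]

def extract_features(mask):
    """Feature extraction: two toy features from the segmented region."""
    area = sum(map(sum, mask))
    total = sum(len(row) for row in mask)
    return {"area": area, "area_fraction": area / total}

def classify(features, area_cutoff=0.25):
    """Classification: a rule standing in for a trained model."""
    return "malignant" if features["area_fraction"] > area_cutoff else "benign"

# End-to-end run on a tiny fake grayscale patch.
image = [[10, 200, 210], [12, 220, 11], [9, 8, 230]]
label = classify(extract_features(segment(preprocess(image))))
```

In a real CAD system each stage would be replaced by a learned component (e.g., a CNN for segmentation, a random forest or deep network for classification), but the stage boundaries are the same.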


Author(s):  
Derya Yiltas-Kaplan

This chapter focuses on the machine learning process in the context of the architecture of software-defined networks (SDNs) and their security mechanisms. Machine learning has been studied widely for traditional network problems, but to date only a limited number of studies in the literature connect SDN security with machine learning approaches. The main reason is that the SDN architecture has emerged only recently and differs from that of traditional networks. These structural differences are also summarized and compared in this chapter. After presenting the main properties of the network architectures, several intrusion detection studies on SDN are introduced and analyzed in terms of their advantages and disadvantages. With this outline, the chapter also aims to be the first organized guide that presents the referenced studies on SDN security and artificial intelligence together.


Entropy ◽  
2020 ◽  
Vol 22 (5) ◽  
pp. 544 ◽  
Author(s):  
Emre Ozfatura ◽  
Sennur Ulukus ◽  
Deniz Gündüz

When gradient descent (GD) is scaled to many parallel workers for large-scale machine learning applications, its per-iteration computation time is limited by straggling workers. Straggling workers can be tolerated by assigning redundant computations and/or coding across data and computations, but in most existing schemes, each non-straggling worker transmits one message per iteration to the parameter server (PS) after completing all its computations. Imposing such a limitation results in two drawbacks: over-computation due to inaccurate prediction of the straggling behavior, and under-utilization due to discarding partial computations carried out by stragglers. To overcome these drawbacks, we consider multi-message communication (MMC) by allowing multiple computations to be conveyed from each worker per iteration, and propose novel straggler avoidance techniques for both coded computation and coded communication with MMC. We analyze how the proposed designs can be employed efficiently to seek a balance between computation and communication latency. Furthermore, we identify the advantages and disadvantages of these designs in different settings through extensive simulations, both model-based and in a real implementation on Amazon EC2 servers, and demonstrate that the proposed schemes with MMC can help improve upon existing straggler avoidance schemes.
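One way to picture the multi-message idea: the parameter server accepts per-partition partial gradients as they arrive and stops as soon as every data partition is covered, so a straggler's partial work still counts instead of being discarded. This toy simulation is an illustration only, not the paper's coded-computation scheme:

```python
def aggregate_with_mmc(worker_messages, num_partitions):
    """Parameter-server step under multi-message communication (MMC).

    worker_messages: list of (partition_id, gradient_vector) tuples in
    arrival order; a worker may contribute several messages, and redundant
    assignment means the same partition can arrive from multiple workers.
    Stops as soon as every partition is covered at least once.
    """
    received = {}
    messages_used = 0
    for partition_id, grad in worker_messages:
        messages_used += 1
        received.setdefault(partition_id, grad)  # duplicate copies ignored
        if len(received) == num_partitions:
            break
    if len(received) < num_partitions:
        raise RuntimeError("not all partitions covered")
    dim = len(next(iter(received.values())))
    full_gradient = [sum(g[i] for g in received.values()) for i in range(dim)]
    return full_gradient, messages_used
```

With one-message schemes the PS would have to wait for each worker's full result; here a straggler that has finished only one of its partitions still contributes that piece.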


2020 ◽  
Vol 77 (4) ◽  
pp. 1267-1273
Author(s):  
Cigdem Beyan ◽  
Howard I Browman

Abstract Machine learning, a subfield of artificial intelligence, offers various methods that can be applied in marine science. It supports data-driven learning, which can result in automated decision making of de novo data. It has significant advantages compared with manual analyses that are labour intensive and require considerable time. Machine learning approaches have great potential to improve the quality and extent of marine research by identifying latent patterns and hidden trends, particularly in large datasets that are intractable using other approaches. New sensor technology supports collection of large amounts of data from the marine environment. The rapidly developing machine learning subfield known as deep learning—which applies algorithms (artificial neural networks) inspired by the structure and function of the brain—is able to solve very complex problems by processing big datasets in a short time, sometimes achieving better performance than human experts. Given the opportunities that machine learning can provide, its integration into marine science and marine resource management is inevitable. The purpose of this themed set of articles is to provide as wide a selection as possible of case studies that demonstrate the applications, utility, and promise of machine learning in marine science. We also provide a forward-look by envisioning a marine science of the future into which machine learning has been fully incorporated.


Sensors ◽  
2021 ◽  
Vol 21 (22) ◽  
pp. 7527
Author(s):  
Mugdim Bublin

Distributed Acoustic Sensing (DAS) is a promising new technology for pipeline monitoring and protection. However, a big challenge is distinguishing between relevant events, like intrusion by an excavator near the pipeline, and interference, like land machines. This paper investigates whether it is possible to achieve adequate detection accuracy with classic machine learning algorithms using simulations and real system implementation. Then, we compare classical machine learning with a deep learning approach and analyze the advantages and disadvantages of both approaches. Although acceptable performance can be achieved with both approaches, preliminary results show that deep learning is the more promising approach, eliminating the need for laborious feature extraction and offering a six times lower event detection delay and twelve times lower execution time. However, we achieved the best results by combining deep learning with the knowledge-based and classical machine learning approaches. At the end of this manuscript, we propose general guidelines for efficient system design combining knowledge-based, classical machine learning, and deep learning approaches.
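The "laborious feature extraction" that classical machine learning requires (and that deep learning avoids) can be illustrated with a toy sketch. The RMS-energy and zero-crossing features and the threshold rule below are assumptions chosen for illustration, not the system's actual pipeline:

```python
import math

def extract_features(trace):
    """Hand-crafted features for a 1-D acoustic trace: RMS energy and
    zero-crossing rate. Classical ML classifiers consume such features;
    a deep network would instead learn directly from the raw trace."""
    n = len(trace)
    rms = math.sqrt(sum(x * x for x in trace) / n)
    zcr = sum(1 for a, b in zip(trace, trace[1:]) if a * b < 0) / (n - 1)
    return rms, zcr

def classify_event(trace, rms_cutoff=0.5):
    """Threshold rule standing in for a trained classifier:
    high sustained energy -> 'excavator', else 'background'."""
    rms, _ = extract_features(trace)
    return "excavator" if rms > rms_cutoff else "background"
```

In the deep learning approach the abstract favors, this feature-engineering step disappears: the raw DAS trace is fed to the network, which learns its own representations.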


2020 ◽  
Vol 8 (6) ◽  
pp. 4496-4500

Skin cancer is the growth and spread of abnormal cells or lesions on the uppermost layer of skin, the epidermis. It is one of the deadliest types of cancer and, if undetected or untreated at an early stage, may lead to the patient's death. Dermatologists use dermatoscopic images to identify the type of skin cancer from the asymmetry, border, colour, texture, and size of a mole or lesion. The same detection task can be tackled with machine learning techniques that classify these images into the respective cancer types. Various studies and techniques have been proposed by researchers across the globe to improve the classification of dermatoscopic images. The proposed studies primarily focus on classifying dermatoscopic images based on the lesion's colour and texture features, followed by intelligent machine learning approaches. Advances in machine intelligence such as deep neural networks and convolutional neural networks (CNNs) can be applied to dermatoscopic images to learn their features. A CNN-based approach offers additional accuracy over hand-crafted feature extraction because the algorithm operates on every pixel of the image, and CNNs can perform the large volume of mathematical operations that image processing and machine learning require. A CNN-based algorithm can therefore classify dermatoscopic images with better efficiency and overall accuracy, drawing on the power of artificial neural networks.
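The pixel-level operation a CNN layer applies across the whole image is a 2-D convolution. A minimal pure-Python version, for illustration only (real CNNs stack many such layers with kernels learned from data, not hand-written ones):

```python
def conv2d(image, kernel):
    """Valid 2-D convolution (cross-correlation) over a grayscale image:
    the kernel slides over every position and produces one weighted sum
    per location. This is the core operation a CNN layer applies."""
    kh, kw = len(kernel), len(kernel[0])
    out_h = len(image) - kh + 1
    out_w = len(image[0]) - kw + 1
    return [[sum(image[i + di][j + dj] * kernel[di][dj]
                 for di in range(kh) for dj in range(kw))
             for j in range(out_w)]
            for i in range(out_h)]
```

A kernel such as `[[1, -1]]` responds to horizontal intensity changes, which is how early CNN layers pick up lesion borders and texture before deeper layers combine them into class evidence.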


Author(s):  
Pracheta J. Raut ◽  
Prof. Avantika Mahadik

The amount of digital data the world produces today is unprecedented. Social media, e-commerce, and the Internet of things generate approximately 2.5 quintillion bytes per day, equal to 100 million Blu-ray discs, or almost 30,000 GB per second, and data volumes will continue to grow. In the health care industry, big data has opened new ways to acquire intelligence and perform data analysis. Records collected from patients, hospitals, doctors, and medical treatments are known as health care big data. Machine learning is used to assemble and evaluate these large amounts of health care data. Analytics and business intelligence (BI) are growing day by day, because they acquire knowledge and support the right decisions. Because this data is vast, complex, and growing, it is very difficult to store, and traditional methods are incapable of managing and processing it. To resolve this difficulty, machine learning tools are applied to the data within a big data analytics framework. Researchers have proposed several machine learning approaches to improve the accuracy of analytics; each technique is applied and the results are compared. One approach in particular, ensemble learning, was found to give accurate results, and the final results show that ensemble learning can obtain high accuracy. In this paper we study various methods and statistical approaches for processing big data for machine learning. We further study various tools for storing big data, and their advantages and disadvantages in the health care industry.
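The ensemble-learning idea the abstract credits with the highest accuracy can be sketched as a simple majority vote over base models. The models here are stand-in callables, not any specific health care classifier:

```python
from collections import Counter

def majority_vote(predictions):
    """Combine the labels predicted by several base models for one record;
    the most frequent label wins (insertion order breaks ties)."""
    return Counter(predictions).most_common(1)[0][0]

def ensemble_predict(models, record):
    """Each 'model' is any callable record -> label. Voting over diverse
    models typically outperforms any single one, which is the intuition
    behind ensemble methods such as bagging and random forests."""
    return majority_vote([model(record) for model in models])
```

Production ensembles (random forests, gradient boosting, stacking) refine this with weighted votes or a learned combiner, but the aggregation principle is the same.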


Author(s):  
Maryam Bagherian ◽  
Elyas Sabeti ◽  
Kai Wang ◽  
Maureen A Sartor ◽  
Zaneta Nikolovska-Coleska ◽  
...  

Abstract The task of predicting the interactions between drugs and targets plays a key role in the process of drug discovery. There is a need to develop novel and efficient prediction approaches in order to avoid costly and laborious yet not-always-deterministic experiments to determine drug–target interactions (DTIs) by experiments alone. These approaches should be capable of identifying the potential DTIs in a timely manner. In this article, we describe the data required for the task of DTI prediction, followed by a comprehensive catalog of machine learning methods and databases that have been proposed and utilized to predict DTIs. The advantages and disadvantages of each set of methods are also briefly discussed. Lastly, the challenges one may face in predicting DTIs using machine learning approaches are highlighted, and we conclude by shedding light on important future research directions.


2021 ◽  
Vol 16 (10) ◽  
pp. 186-188
Author(s):  
A. Saran Kumar ◽  
R. Rekha

Drug–drug interaction (DDI) refers to a change in the effect of a drug when a person takes another drug. It is the main cause of avoidable adverse drug reactions, causing major harm to patients' health and burdening health information systems. Many computational techniques have been used to predict the adverse effects of drug–drug interactions, but these methods do not provide adequate information for DDI prediction. Machine learning algorithms provide a set of methods that can increase accuracy and success rates for well-defined problems with abundant data. This study provides a comprehensive survey of the most popular machine learning and deep learning algorithms used by researchers to predict DDI. In addition, the advantages and disadvantages of the various machine learning approaches are also discussed.
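As one hedged illustration of how machine learning can score candidate DDIs, the guilt-by-association rule below flags a drug pair as likely to interact when it resembles a pair already known to interact. This is a toy example with invented feature profiles, not a method from the surveyed literature:

```python
def jaccard(a, b):
    """Set similarity between two drugs' feature profiles
    (e.g., targets, side effects, substructures)."""
    a, b = set(a), set(b)
    return len(a & b) / len(a | b) if (a | b) else 0.0

def predict_interaction(drug_features, known_pairs, candidate, threshold=0.5):
    """Flag the candidate pair as likely interacting if, under either
    drug-to-drug matching, it is similar enough to a known interacting pair."""
    d1, d2 = candidate
    for k1, k2 in known_pairs:
        score = max(jaccard(drug_features[d1], drug_features[k1]) *
                    jaccard(drug_features[d2], drug_features[k2]),
                    jaccard(drug_features[d1], drug_features[k2]) *
                    jaccard(drug_features[d2], drug_features[k1]))
        if score >= threshold:
            return True
    return False
```

Real DDI predictors replace this hand-written rule with trained models (kernels, matrix factorization, deep networks) over the same kind of similarity features.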


2020 ◽  
Vol 7 ◽  
Author(s):  
Kuncheng Song ◽  
Fred A. Wright ◽  
Yi-Hui Zhou

Microbiome composition profiles generated from 16S rRNA sequencing have been extensively studied for their usefulness in phenotype trait prediction, including for complex diseases such as diabetes and obesity. These microbiome compositions have typically been quantified in the form of Operational Taxonomic Unit (OTU) count matrices. However, alternate approaches such as Amplicon Sequence Variants (ASV) have been used, as well as the direct use of k-mer sequence counts. The overall effect of these different types of predictors when used in concert with various machine learning methods has been difficult to assess, due to varied combinations described in the literature. Here we provide an in-depth investigation of more than 1,000 combinations of these three clustering/counting methods, in combination with varied choices for normalization and filtering, grouping at various taxonomic levels, and the use of more than ten commonly used machine learning methods for phenotype prediction. The use of short k-mers, which have computational advantages and conceptual simplicity, is shown to be effective as a source for microbiome-based prediction. Among machine-learning approaches, tree-based methods show consistent, though modest, advantages in prediction accuracy. We describe the various advantages and disadvantages of combinations in analysis approaches, and provide general observations to serve as a useful guide for future trait-prediction explorations using microbiome data.
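The k-mer counting that feeds such predictors is straightforward to sketch. This toy version builds the sample-by-k-mer count matrix a tree-based learner would then consume; the function names are ours, for illustration:

```python
from collections import Counter

def kmer_counts(sequence, k):
    """Count all overlapping k-mers in one 16S read."""
    return Counter(sequence[i:i + k] for i in range(len(sequence) - k + 1))

def kmer_matrix(reads, k):
    """Sample-by-k-mer count matrix usable as ML features: one row per read,
    columns are all k-mers seen in the collection, in sorted order."""
    counts = [kmer_counts(read, k) for read in reads]
    vocab = sorted(set().union(*counts))
    return vocab, [[c[km] for km in vocab] for c in counts]
```

For short k the vocabulary is small (at most 4^k columns), which is part of the computational appeal of k-mer features compared with OTU or ASV pipelines.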

