Machine Learning and Its Application in Skin Cancer

Author(s):  
Kinnor Das ◽  
Clay J. Cockerell ◽  
Anant Patil ◽  
Paweł Pietkiewicz ◽  
Mario Giulini ◽  
...  

Artificial intelligence (AI) has wide applications in healthcare, including dermatology. Machine learning (ML) is a subfield of AI involving statistical models and algorithms that can progressively learn from data to predict the characteristics of new samples and perform a desired task. Although it has a significant role in the detection of skin cancer, dermatology still lags behind radiology in terms of AI acceptance. With continuous spread, use, and emerging technologies, AI is becoming more widely available, even to the general population. AI can be used for the early detection of skin cancer; for example, deep convolutional neural networks can help develop systems that evaluate images of the skin to diagnose skin cancer. Early detection is key to effective treatment and better outcomes of skin cancer. Specialists can diagnose the cancer accurately; however, given their limited numbers, there is a need for automated systems that can diagnose the disease efficiently, to save lives and reduce the health and financial burdens on patients. ML can be of significant use in this regard. In this article, we discuss the fundamentals of ML and its potential in assisting the diagnosis of skin cancer.
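As a minimal sketch of the convolution-plus-ReLU building block that such deep convolutional neural networks stack into image classifiers (all values here are illustrative toy data, not the authors' model):

```python
import numpy as np

def conv2d(image, kernel):
    """Valid-mode 2-D cross-correlation of a single-channel image."""
    kh, kw = kernel.shape
    ih, iw = image.shape
    out = np.zeros((ih - kh + 1, iw - kw + 1))
    for r in range(out.shape[0]):
        for c in range(out.shape[1]):
            out[r, c] = np.sum(image[r:r + kh, c:c + kw] * kernel)
    return out

def relu(x):
    """Rectified linear unit: keeps positive activations, zeroes the rest."""
    return np.maximum(x, 0.0)

# Toy 5x5 "pixel patch" with a left-to-right intensity gradient and a
# 3x3 vertical-edge kernel; illustrative values only.
patch = np.arange(25, dtype=float).reshape(5, 5)
kernel = np.array([[-1.0, 0.0, 1.0]] * 3)
feature_map = relu(conv2d(patch, kernel))
print(feature_map.shape)  # (3, 3)
```

A real diagnostic network learns many such kernels from labeled lesion images rather than using fixed ones.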

Author(s):  
Pawan Sonawane ◽  
Sahel Shardhul ◽  
Raju Mendhe

The vast majority of skin cancer deaths are from melanoma, with about 1.04 million cases annually. Early detection can be immensely helpful in curing it, but most diagnostic procedures are either extremely expensive or unavailable to the vast majority, as diagnostic centers are concentrated in urban regions only. Thus, there is a need for an application that can perform a quick, efficient, and low-cost diagnosis. Our solution proposes to build a serverless mobile application on the AWS cloud that takes images of potential skin tumors and classifies them as either malignant or benign. The classification would be carried out using a trained Convolutional Neural Network model and transfer learning (Inception v3). Several experiments will be performed based on the morphology and color of the tumor to identify ideal parameters.
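The transfer-learning idea behind such a classifier, a frozen pretrained feature extractor plus a small trainable binary head, can be sketched in NumPy. The frozen projection below is a hypothetical stand-in for a backbone like Inception v3, and all data and labels are synthetic:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for a pretrained backbone such as Inception v3:
# a fixed projection from flattened pixels to an 8-dim feature vector
# that is never updated during training (the "frozen" part).
W_frozen = rng.standard_normal((16, 8))

def extract_features(images):
    return np.tanh(images @ W_frozen)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

X = rng.standard_normal((40, 16))       # 40 toy "image patches"
feats = extract_features(X)
y = (feats[:, 0] > 0).astype(float)     # synthetic benign/malignant labels

# Only the small classification head is trained.
w_head, b_head, lr = np.zeros(8), 0.0, 0.5
for _ in range(500):
    p = sigmoid(feats @ w_head + b_head)
    w_head -= lr * feats.T @ (p - y) / len(y)   # logistic-loss gradient
    b_head -= lr * np.mean(p - y)

preds = (sigmoid(feats @ w_head + b_head) > 0.5).astype(float)
accuracy = float(np.mean(preds == y))
```

In practice the frozen backbone carries features learned on a large image corpus, so only the lightweight head needs training on the smaller dermoscopy dataset.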


2021 ◽  
Vol 19 (3) ◽  
pp. 163
Author(s):  
Dušan Bogićević

Edge data processing represents the new evolution of the Internet and cloud computing. Its application to the Internet of Things (IoT) is a step towards faster processing of sensor information for better performance. Automated systems contain a large number of sensors, whose information needs to be processed, and acted upon, in the shortest possible time. The paper describes the possibility of applying artificial intelligence on edge devices, using the example of finding a parking space for a vehicle and directing it based on the segment the vehicle belongs to. A machine learning algorithm based on vehicle dimensions is used for vehicle classification.
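A dimension-based vehicle classifier of this kind can be sketched as a nearest-centroid rule; the centroid values and segment names below are hypothetical illustrations, not the paper's learned parameters:

```python
# Hypothetical segment centroids as (length m, width m, height m);
# a deployed system would learn these from sensor data.
CENTROIDS = {
    "motorcycle": (2.1, 0.8, 1.1),
    "car":        (4.5, 1.8, 1.5),
    "van":        (5.5, 2.0, 2.4),
    "truck":      (9.0, 2.5, 3.8),
}

def classify_vehicle(length, width, height):
    """Assign the segment whose centroid is nearest in dimension space."""
    def sq_dist(centroid):
        return sum((a - b) ** 2 for a, b in zip((length, width, height), centroid))
    return min(CENTROIDS, key=lambda name: sq_dist(CENTROIDS[name]))

print(classify_vehicle(4.3, 1.7, 1.4))  # car
```

A rule this small fits comfortably on an edge device, which is the point of moving inference out of the cloud.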


2021 ◽  
Vol 39 (15_suppl) ◽  
pp. 3072-3072
Author(s):  
Habte Aragaw Yimer ◽  
Wai Hong Wilson Tang ◽  
Mohan K. Tummala ◽  
Spencer Shao ◽  
Gina G. Chung ◽  
...  

3072 Background: The Circulating Cell-free Genome Atlas study (CCGA; NCT02889978) previously demonstrated that a blood-based multi-cancer early detection (MCED) test utilizing cell-free DNA (cfDNA) sequencing in combination with machine learning could detect cancer signals across multiple cancer types and predict cancer signal origin. Cancer classes were defined within the CCGA study for sensitivity reporting. Separately, cancer types defined by the American Joint Committee on Cancer (AJCC) criteria, which outline unique staging requirements and reflect a distinct combination of anatomic site, histology and other biologic features, were assigned to each cancer participant using the same source data for primary site of origin and histologic type. Here, we report CCGA ‘cancer class’ designation and AJCC ‘cancer type’ assignment within the third and final CCGA3 validation substudy to better characterize the diversity of tumors across which a cancer signal could be detected with the MCED test that is nearing clinical availability. Methods: CCGA is a prospective, multicenter, case-control, observational study with longitudinal follow-up (overall population N = 15,254). Plasma cfDNA from evaluable samples was analyzed using a targeted methylation bisulfite sequencing assay and a machine learning approach, and test performance, including sensitivity, was assessed. For sensitivity reporting, CCGA cancer classes were assigned to cancer participants using a combination of the type of primary cancer reported by the site and tumor characteristics abstracted from the site pathology reports by GRAIL pathologists. Each cancer participant also was separately assigned an AJCC cancer type based on the same source data using AJCC staging manual (8th edition) classifications. Results: A total of 4077 participants comprised the independent validation set with confirmed status (cancer: n = 2823; non-cancer: n = 1254 with non-cancer status confirmed at year-one follow-up). 
Sensitivity was reported for 24 cancer classes (sample sizes ranged from 10 to 524 participants), as well as an “other” cancer class (59 participants). According to AJCC classification, the MCED test was found to detect cancer signals across 50+ AJCC cancer types, including some types not present in the training set; some cancer types had limited representation. Conclusions: This MCED test that is nearing clinical availability and was evaluated in the third CCGA substudy detected cancer signals across 50+ AJCC cancer types. Reporting CCGA cancer classes and AJCC cancer types demonstrates the ability of the MCED test to detect cancer signals across a set of diverse cancer types representing a wide range of biologic characteristics, including cancer types that the classifier has not been trained on, and supports its use on a population-wide scale. Clinical trial information: NCT02889978.


Author(s):  
Jeremy Riel

Conversational agents, also known as chatbots, are automated systems for engaging in two-way dialogue with human users. These systems have existed in one form or another for at least 60 years but have recently demonstrated significant potential with advances in machine learning and artificial intelligence technologies. The use of conversational agents or chatbots for education can potentially reduce costs and supplement teacher instruction in transformative ways for formal learning. This chapter examines the design and status of chatbots and conversational agents for educational purposes. Common design functions and goals of educational chatbots are described, along with current practical applications of chatbots for educational purposes. Finally, this chapter considers issues about pedagogical commitments, ethics, and equity to suggest future work in the field.


Machine learning combines different mathematical models, artificial intelligence approaches, and past recorded data sets, and uses different learning algorithms for different types of data; it is commonly classified into three types. Its advantage is that it uses artificial neural networks and, based on error rates, adjusts the weights to improve itself over further epochs. However, machine learning works well only when the features are defined accurately. Deciding which features to select requires good domain knowledge, which makes machine learning developer-dependent; a lack of domain knowledge degrades performance. This dependency inspired the invention of deep learning. Deep learning can detect features through self-training models and is able to give better results than artificial intelligence or machine learning alone. It uses functions such as ReLU, gradient descent, and optimizers, which make it the best approach available so far. To apply such optimizers efficiently, one should understand the mathematical computations and convolutions running behind the layers. It also uses different pooling layers to extract features. These modern approaches need a high level of computation, which requires CPUs and GPUs; if such high-performance hardware is not available, one can use the Google Colaboratory framework. The deep learning approach is shown to improve skin cancer detection, as demonstrated in this paper. The paper also aims to provide the reader with background knowledge of the various practices mentioned above.
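The building blocks named above (ReLU, gradient descent, and pooling) can be shown in a minimal NumPy sketch with illustrative toy values:

```python
import numpy as np

def relu(x):
    """Zero out negative activations, pass positives through."""
    return np.maximum(x, 0.0)

def max_pool_2x2(fmap):
    """2x2 max pooling with stride 2 on an even-sized feature map."""
    h, w = fmap.shape
    return fmap.reshape(h // 2, 2, w // 2, 2).max(axis=(1, 3))

# Gradient descent on a toy 1-D loss L(w) = (w - 3)^2: each step moves
# the weight against the gradient, scaled by the learning rate.
w, lr = 0.0, 0.1
for _ in range(100):
    grad = 2.0 * (w - 3.0)   # dL/dw
    w -= lr * grad
print(round(w, 4))  # converges to 3.0

fmap = np.array([[1., 2., 0., -1.],
                 [3., 4., -2., 0.],
                 [0., 1., 5., 6.],
                 [2., 0., 7., 8.]])
pooled = max_pool_2x2(relu(fmap))
print(pooled)
```

Real optimizers (momentum, Adam) refine the same weight-update step, and pooling layers apply this downsampling after each convolutional stage.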


Author(s):  
Yung Ming ◽  
Lily Yuan

Machine Learning (ML) and Artificial Intelligence (AI) methods are transforming many commercial and academic areas, including feature extraction, autonomous driving, computational linguistics, and voice recognition. These new technologies are now having a significant effect in radiography, forensics, and many other areas where the accessibility of automated systems may improve the precision and repeatability of essential job performance. In this systematic review, we begin by providing a short overview of the different methods that are currently being developed, with a particular emphasis on those utilized in biomedical studies.


2013 ◽  
Vol 29 (3) ◽  
pp. 170-181 ◽  
Author(s):  
Lois J. Loescher ◽  
Monika Janda ◽  
H. Peter Soyer ◽  
Kimberly Shea ◽  
Clara Curiel-Lewandrowski

2021 ◽  
Author(s):  
Markus Langer ◽  
Cornelius J. König ◽  
Caroline Back ◽  
Victoria Hemsing

Introducing automated systems based on artificial intelligence and machine learning for ethically sensitive decision tasks requires investigating trust processes in relation to such tasks. Using an example of such a task (personnel selection), this study investigates trustworthiness, trust, and reliance in light of a trust violation relating to ethical standards and a trust repair intervention. Specifically, participants evaluated applicant preselection outcomes produced by either a human or an automated system across twelve personnel selection tasks. We additionally varied information regarding the imperfection of the human and the automated system. In task rounds five through eight, the preselected applicants were predominantly male, constituting a trust violation due to a breach of ethical standards. Before task round nine, participants received an excuse for the biased preselection (i.e., a trust repair intervention). Results showed that participants initially perceived automated systems to be less trustworthy and had less intention to trust them. Specifically, participants perceived the systems to be less able and less flexible, but also less biased, a perception that was sustained even in light of unfair bias. Furthermore, for the automated system, the trust violation and the trust repair intervention had weaker effects; those effects were partly stronger when imperfection was highlighted for the automated system. We conclude that it is crucial to investigate trust processes in relation to automated systems in ethically sensitive domains such as personnel selection, as insights from classical areas of automation might not translate to application contexts where ethical standards are central to trust processes.

