Design of a Skin Cancer Diagnosing Web Application Based on Convolutional Neural Network Model and Chatterbot Application Programming Interface

2021 ◽  
Vol 2078 (1) ◽  
pp. 012039
Author(s):  
Qi An

Abstract Skin cancer has become a major public health concern. With the popularization of machine learning, a considerable amount of skin cancer data has been collected, yet diagnostic applications on the market have barely used it. In this paper, we design a web application that diagnoses skin cancer using a CNN model and the Chatterbot API. First, the application lets the user upload an image of their skin. Next, a CNN model trained on a large set of previously taken images predicts whether the skin is affected by skin cancer and, if so, which type of skin cancer the uploaded image belongs to. Last, a chatbot built on the Chatterbot API, trained on hundreds of questions and answers collected from the internet, interacts with the user and gives feedback based on the information provided by the CNN model. The application performs well in both classification and user interaction: the CNN model reaches an accuracy of 0.95, and the chatbot can answer more than 100 questions about skin cancer. The backend, based on the CNN model and the Chatterbot API, is connected to a frontend built with the Vue JavaScript framework.
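The abstract does not publish the network's architecture or labels, so the following is only a minimal NumPy sketch of the CNN forward pass such a classifier performs (convolution, ReLU, pooling, dense layer, softmax), with random weights in place of trained ones; the class names are hypothetical:

```python
import numpy as np

# Hypothetical lesion classes; the paper does not list its actual labels.
CLASSES = ["benign", "melanoma", "basal cell carcinoma", "actinic keratosis"]

def conv2d(image, kernel):
    """Valid 2-D cross-correlation of a grayscale image with one kernel."""
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

def relu(x):
    return np.maximum(0.0, x)

def softmax(logits):
    e = np.exp(logits - logits.max())
    return e / e.sum()

def predict(image, kernel, weights, bias):
    """One conv layer -> ReLU -> global average pool -> dense -> softmax."""
    feat = relu(conv2d(image, kernel))
    pooled = feat.mean()                  # global average pooling to a scalar
    logits = weights * pooled + bias      # dense layer over the pooled feature
    probs = softmax(logits)
    return CLASSES[int(np.argmax(probs))], probs

rng = np.random.default_rng(0)
image = rng.random((28, 28))              # stand-in for an uploaded skin image
kernel = rng.standard_normal((3, 3))
weights = rng.standard_normal(len(CLASSES))
bias = rng.standard_normal(len(CLASSES))

label, probs = predict(image, kernel, weights, bias)
print(label, probs.sum())
```

A trained model would learn many kernels and dense weights from the labeled image set; the pipeline shape, not the weights, is the point here.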

Author(s):  
Raul Sierra-Alcocer ◽  
Christopher Stephens ◽  
Juan Barrios ◽  
Constantino González‐Salazar ◽  
Juan Carlos Salazar Carrillo ◽  
...  

SPECIES (Stephens et al. 2019) is a tool to explore spatial correlations in biodiversity occurrence databases. The main idea behind the SPECIES project is that the geographical correlations between the distributions of taxa records carry useful information. The problem, however, is that with thousands of species (Mexico's National System of Biodiversity Information has records of around 70,000 species) there are millions of potential associations, and exploring them is far from easy. Our goal with SPECIES is to facilitate the discovery and application of meaningful relations hiding in our data. The main variables in SPECIES are the geographical distributions of species occurrence records. Other types of variables, like the climatic variables from WorldClim (Hijmans et al. 2005), are explanatory data that serve for modeling. The system offers two modes of analysis. In the first, the user defines a target species and a selection of species and abiotic variables; the system then computes the spatial correlations between the target species and each of the other species and abiotic variables. The request can be as small as comparing one species to another, or as large as comparing one species to all the species in the database. A user may wonder, for example, which species are usual neighbors of the jaguar; this mode helps answer such questions. The second mode gives a network perspective: the user defines two groups of taxa (and/or environmental variables), and the output is a correlation network in which the weight of a link between two nodes represents the spatial correlation between the variables those nodes represent. For example, one group of taxa could be hummingbirds (family Trochilidae) and the second flowers of the family Lamiaceae. This output helps the user analyze which pairs of hummingbird and flower are highly correlated in the database.
The SPECIES data architecture is optimized to support fast hypothesis prototyping and testing against thousands of biotic and abiotic variables, and a web visualization interface presents descriptive results to the user at different levels of detail. The methodology in SPECIES is relatively simple: it partitions the geographical space with a regular grid and treats a species occurrence distribution as a present/not-present boolean variable over the cells. Given two species (or one species and one abiotic variable), it measures whether the number of co-occurrences between the two is more (or less) than expected. More co-occurrences than expected signal a positive relation, whereas fewer are evidence of disjoint distributions. SPECIES provides an open web application programming interface (API) to request the computation of correlations and statistical dependencies between variables in the database. Users can create applications that consume this 'statistical web service' or use it directly to further analyze the results in frameworks like R or Python. The project includes an interactive web application that does exactly that: it requests analyses from the web service and lets the user experiment with and visually explore the results. We believe this approach can be used, on one side, to augment the services provided by data repositories and, on the other, to facilitate the creation of specialized applications that are clients of these services. This scheme supports big-data-driven research for users from a wide range of backgrounds, because end users need neither the technical know-how nor the infrastructure to handle large databases. Currently, SPECIES hosts all records from Mexico's National Biodiversity Information System (CONABIO 2018) and a subset of Global Biodiversity Information Facility data covering the contiguous USA (GBIF.org 2018b) and Colombia (GBIF.org 2018a).
It also includes discretizations of environmental variables from WorldClim, from the Environmental Rasters for Ecological Modeling project (Title and Bemmels 2018), from CliMond (Kriticos et al. 2012), and topographic variables (USGS EROS Center 1997b, USGS EROS Center 1997a). The long-term plan, however, is to incrementally include more data, especially all data from the Global Biodiversity Information Facility. The code of the project is open source, and the repositories are available online (front-end, web services application programming interface, and database building scripts). This presentation is a demonstration of SPECIES' functionality and its overall design.
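The grid-based co-occurrence test described above can be sketched as follows. This is a minimal illustration, not necessarily the exact statistic SPECIES implements: presences are booleans over grid cells, the expectation assumes independence, and the score here is a simple binomial z-score; the species and overlap rates are made up.

```python
import numpy as np

def cooccurrence_score(pres_a, pres_b):
    """Compare observed co-occurrences of two boolean presence vectors
    (one entry per grid cell) against the expectation under independence.
    Returns (observed, expected, z), where z is a binomial z-score."""
    pres_a = np.asarray(pres_a, dtype=bool)
    pres_b = np.asarray(pres_b, dtype=bool)
    n_cells = pres_a.size
    observed = int(np.sum(pres_a & pres_b))
    p = pres_a.mean() * pres_b.mean()     # P(both present) if independent
    expected = n_cells * p
    var = n_cells * p * (1 - p)
    z = (observed - expected) / np.sqrt(var) if var > 0 else 0.0
    return observed, expected, z

rng = np.random.default_rng(1)
# Hypothetical presence/absence grids: a "prey" species engineered to
# overlap heavily with the jaguar's cells, plus scattered background.
jaguar = rng.random(1000) < 0.2
prey = (jaguar & (rng.random(1000) < 0.8)) | (rng.random(1000) < 0.05)
obs, exp, z = cooccurrence_score(jaguar, prey)
print(obs, round(exp, 1), round(z, 2))    # z > 0 signals a positive relation
```

A z far above zero corresponds to the "more than expected" case in the text; a strongly negative z to disjoint distributions.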


2018 ◽  
Vol 21 (2) ◽  
pp. 34-38
Author(s):  
Awal Kurniawan ◽  
Intan Sari Areni ◽  
Andani Achmad

Web technology has advanced considerably, from the static era of web 1.0 to web technologies that can handle hardware-level concerns such as storage, speech recognition, and geolocation. One of today's web technologies is the progressive web application. This research aims to design a system that can cache website content files. The system uses a progressive web application built on a service worker. The data source used as the object of this study is public complaint data in JSON form. The research uses an experimental method in designing the application. Complaint data drawn from an API (Application Programming Interface) is first displayed while the network is active; during this time, the service worker performs its caching. Afterwards, the stored data can be accessed while the network is inactive. The result of this research is that a complaint system with an embedded service worker can cache up to 500 complaint records. Although installing the service worker makes accessing the application take longer initially, the application is faster when accessed offline because the data is loaded from the service worker cache.
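Real service workers are written in JavaScript (using `caches.match` and `cache.put`); to keep this document's examples in one language, the cache-first strategy the study relies on is sketched below language-agnostically in Python, with all names illustrative:

```python
# Language-agnostic sketch of the cache-first strategy a service worker applies:
# serve from the cache when possible, otherwise fetch from the network and store.

class CacheFirstClient:
    def __init__(self, fetch_fn, max_entries=500):
        self.cache = {}                 # url -> response body
        self.fetch_fn = fetch_fn        # network fetch; unusable when offline
        self.max_entries = max_entries  # the study cached up to 500 records

    def get(self, url):
        if url in self.cache:           # cache hit: serve instantly, even offline
            return self.cache[url]
        body = self.fetch_fn(url)       # cache miss: go to the network
        if len(self.cache) < self.max_entries:
            self.cache[url] = body      # store for later offline access
        return body

def online_fetch(url):
    return '{"complaint": "sample JSON payload for %s"}' % url

client = CacheFirstClient(online_fetch)
first = client.get("/api/complaints/1")   # fetched from the network, then cached
client.fetch_fn = None                    # simulate going offline
second = client.get("/api/complaints/1")  # still served, from the cache
print(first == second)
```

The trade-off the abstract reports falls out of this shape: the first (online) access pays the caching cost, and every offline access is answered from the cache without touching the network.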


2021 ◽  
Vol 7 (2) ◽  
pp. 108-118
Author(s):  
Erwin Yudi Hidayat ◽  
Raindy Wicaksana Hardiansyah ◽  
Affandy Affandy

To improve performance and evaluate quality, public companies need feedback from the public and consumers, which can be obtained through social media. As home to the world's third-largest population of Twitter users, Indonesia circulates tweets with the potential to raise companies' reputation and image. Using a Deep Neural Network (DNN), a neural network composed of more than one layer, sentiment analysis of Indonesian-language Twitter has been found to outperform other methods. This research analyzes sentiment in tweets from the Indonesian public toward a number of public companies using a DNN. A total of 5504 tweet records were collected by crawling the Twitter Application Programming Interface (API) and then preprocessed (cleansing, case folding, formalization, stemming, and tokenization). Labeling was performed for 3902 records using the Sentiment Strength Detection application. Model training used the DNN algorithm with varying numbers of hidden layers, node arrangements, and learning rates. The experiment with a 90:10 training-to-testing split gave the best performance: a model with 3 hidden layers of 128, 256, and 128 nodes and a learning rate of 0.005 reached an accuracy of 88.72%.
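The best-performing architecture reported (three hidden layers of 128, 256, and 128 nodes) can be sketched as a NumPy forward pass. The input width, activations, and three-class output are assumptions for illustration; the paper specifies only the hidden-layer sizes and the 0.005 learning rate:

```python
import numpy as np

def relu(x):
    return np.maximum(0.0, x)

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

rng = np.random.default_rng(0)
# Assumed 300-dim text features in, 3 sentiment classes out;
# the 128-256-128 hidden layers are the ones the abstract reports.
layer_sizes = [300, 128, 256, 128, 3]
weights = [rng.standard_normal((a, b)) * np.sqrt(2.0 / a)   # He initialization
           for a, b in zip(layer_sizes[:-1], layer_sizes[1:])]
biases = [np.zeros(b) for b in layer_sizes[1:]]

def predict(x):
    """Forward pass: three ReLU hidden layers, then a softmax output."""
    for w, b in zip(weights[:-1], biases[:-1]):
        x = relu(x @ w + b)
    probs = softmax(x @ weights[-1] + biases[-1])
    return probs          # e.g. P(negative), P(neutral), P(positive)

probs = predict(rng.standard_normal(300))
print(probs, probs.argmax())
```

Training would update `weights` and `biases` by backpropagation with the reported learning rate of 0.005; only the untrained forward structure is shown here.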


Author(s):  
Ricardo Santos ◽  
Ivo Pereira ◽  
Isabel Azevedo

Detailed documentation and software tests are key factors in the success of a web application programming interface (API). When designing an API, especially with a design-first approach, it is important to define a formal contract, known as the API specification. This document must contain all necessary information about the API's behavior. The specification can then be used to dynamically generate API components such as documentation, client and server code, and software tests, reducing development and maintenance costs. This chapter presents a study of the OpenAPI Specification and its application to designing a new RESTful API for E-goi. It also presents a set of solutions for generating documentation, client code libraries, and test cases.
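A minimal OpenAPI 3.0 fragment shows the kind of contract such a document defines (paths, operations, parameters, response schemas); the endpoint and fields here are illustrative, not E-goi's actual API:

```yaml
openapi: 3.0.3
info:
  title: Example Contact API   # illustrative; not E-goi's actual contract
  version: "1.0"
paths:
  /contacts/{contactId}:
    get:
      summary: Fetch a single contact
      parameters:
        - name: contactId
          in: path
          required: true
          schema:
            type: integer
      responses:
        "200":
          description: The requested contact
          content:
            application/json:
              schema:
                $ref: "#/components/schemas/Contact"
components:
  schemas:
    Contact:
      type: object
      properties:
        id:
          type: integer
        email:
          type: string
          format: email
```

From a contract like this, generators can emit reference documentation, typed client libraries, server stubs, and request/response test cases without hand-writing each artifact.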


2018 ◽  
Vol 3 (2) ◽  
pp. 361 ◽  
Author(s):  
Prokhorov I.V. ◽  
Kochetkov O.T. ◽  
Filatov A.A.

The article deals with the study, development, and practical use in teaching of complex laboratory work on extracting and analyzing big data to train specialists in specialty 10.05.04 "Information and Analytical Security Systems", training area "Information Security of Financial and Economic Structures", within the educational discipline "Distributed Automated Information Systems". Keywords: big data, data scientist, extraction, processing and analysis of big data, information security of financial and economic structures, the Internet, Yandex, Google, application programming interface (API).


Author(s):  
Uwe Zdun

This chapter examines the use of patterns for reengineering legacy systems to the Web. Today, reengineering existing (legacy) systems to the Web is a typical software maintenance task. In such projects, developers integrate a Web representation with the legacy system's application programming interface (API) and its responses. Often, the same information is also provided through channels other than HTTP and in formats other than HTML, while the old (legacy) interfaces are still supported. Add-on services such as security or logging are required, and the performance and scalability of the Web application might be crucial. To resolve these issues, many different concepts and frameworks have to be well understood: legacy system wrapping, connection handling, remoting, service abstraction, adaptation techniques, dynamic content generation, and others. In this chapter, we present patterns from different sources that resolve these issues, integrate them into a pattern language operating in the context of reengineering to the Web, and present pattern variants and examples in this context.
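The combination of legacy-system wrapping and channel-independent content generation that the chapter discusses can be sketched as follows; the class and function names are illustrative, not the chapter's actual pattern names or code:

```python
# Sketch: one wrapper adapts a legacy API's flat responses into a
# channel-neutral form, and per-channel renderers turn that form into
# HTML (for the Web) or JSON (for other channels).

import json

class LegacySystem:
    """Stand-in for an old API that returns flat, positional records."""
    def lookup(self, account_id):
        return ("ACC-%d" % account_id, 1250.75)

class LegacyWrapper:
    """Adapts the legacy response into a channel-neutral dictionary."""
    def __init__(self, legacy):
        self.legacy = legacy

    def account(self, account_id):
        name, balance = self.legacy.lookup(account_id)
        return {"name": name, "balance": balance}

def render_html(data):
    """Web channel: HTML representation."""
    return "<p>%s: %.2f</p>" % (data["name"], data["balance"])

def render_json(data):
    """Non-HTML channel: machine-readable representation."""
    return json.dumps(data)

wrapper = LegacyWrapper(LegacySystem())
data = wrapper.account(7)
print(render_html(data))
print(render_json(data))
```

Because the wrapper isolates the legacy interface, add-on services such as logging or security checks can be layered into `LegacyWrapper` without touching either the legacy system or the renderers.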


2017 ◽  
Vol 4 (2) ◽  
pp. 112-119
Author(s):  
Hadi Pranoto ◽  
Eko Budi Setiawan

Android updates its system with every version it releases, and new Application Programming Interfaces (APIs) are added each time Google releases a new Android operating system. The availability of APIs for third-party applications gives developers the opportunity to monitor Android smartphones. Google Device Manager, for example, can instruct Android smartphones over the internet, but it has a shortcoming: it fails when the target smartphone's internet connection is inactive. In this research, the author uses SMS as the medium for processing instructions and accessing the system API for monitoring purposes. The result is that, using SMS, a user can instruct an Android smartphone to take photos, get the current location, ring, delete smartphone files, set screen protection, and back up contacts, with higher messaging reliability. The application runs well on Android Lollipop 5.1 (API level 22) and above, which provide enough API support for its functionality.
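The core idea, parsing an incoming text message and dispatching it to a monitoring action, can be sketched as a simple command table. The keywords and actions below are hypothetical; the paper's actual keyword set and Android API calls are not given in the abstract:

```python
# Sketch of SMS command dispatch: the phone-side app parses an incoming
# text body and maps it to a monitoring action. All names are illustrative.

def take_photo():
    return "photo taken"

def get_location():
    return "lat=-5.14, lon=119.43"   # illustrative coordinates

def ring():
    return "ringing"

# Command keyword -> action; a real app would also cover file deletion,
# screen protection, and contact backup, each behind an Android permission.
COMMANDS = {
    "PHOTO": take_photo,
    "LOCATE": get_location,
    "RING": ring,
}

def handle_sms(body):
    """Normalize the SMS body and run the matching action, if any."""
    action = COMMANDS.get(body.strip().upper())
    if action is None:
        return "unknown command"
    return action()

print(handle_sms("locate"))
```

On Android, `handle_sms` would sit behind a broadcast receiver for incoming SMS, and each action's result would be sent back to the controlling number as a reply message.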


2014 ◽  
Vol 518 ◽  
pp. 305-309
Author(s):  
Wen Tao Liu

Offline storage technology has many uses in web applications: it can store user status, cached data, temporary data, persistent data, and so on. This paper discusses several typical web client storage technologies, including the IE browser's proprietary UserData store, localStorage and sessionStorage from HTML5, Web SQL Database, Indexed Database, and the classic Cookie. Their concrete usage is explained, their individual strengths and differences are compared, and their respective application scenarios and the issues that need attention are discussed. A general cross-browser offline storage method is presented that uses a single application programming interface to drive the different browsers' offline storage technologies.
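The cross-browser approach, one interface over whichever storage backend the environment supports, is an adapter pattern. A minimal sketch follows; the backends here are in-memory stand-ins, whereas a browser version would wrap localStorage, UserData, Indexed Database, and so on:

```python
# Sketch of a unified offline-storage API: the first available backend is
# selected at runtime, and callers only ever see get/set.

class MemoryBackend:
    available = True                 # stand-in for a supported technology
    def __init__(self):
        self._store = {}
    def set(self, key, value):
        self._store[key] = value
    def get(self, key):
        return self._store.get(key)

class UnavailableBackend:
    available = False                # e.g. Web SQL in a browser that lacks it

class OfflineStorage:
    """Single API over whichever backend the environment supports."""
    def __init__(self, backends):
        self.backend = next(b() for b in backends if b.available)
    def set(self, key, value):
        self.backend.set(key, value)
    def get(self, key):
        return self.backend.get(key)

storage = OfflineStorage([UnavailableBackend, MemoryBackend])
storage.set("session", "abc123")
print(storage.get("session"))
```

Ordering the backend list by preference lets each browser silently fall back to whatever storage it actually implements, which is the cross-browser behavior the paper describes.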


2021 ◽  
Author(s):  
Sowmiya K ◽  
Supriya S ◽  
R. Subhashini

Analysis of structured data has seen tremendous success in the past. However, analyzing large-scale unstructured data in video form remains a challenging area. YouTube, a Google company, has over a billion users and generates billions of views. Since YouTube data is created in huge volume, with many views and at equally great speed, there is a strong demand to store, process, and carefully study this data to make it usable. The project uses the YouTube Data API (Application Programming Interface), which allows applications or websites to incorporate functions used by the YouTube application to fetch and view information. The Google Developers Console is used to generate a unique access key, which is required to fetch data from a public YouTube channel. The data is processed and finally stored in AWS. The project extracts meaningful output that management can use for analysis; these methodologies are helpful for business intelligence.
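A request to the YouTube Data API v3 for a channel's videos can be sketched as below. The endpoint and parameter names are the real v3 `search` interface; the key and channel id are placeholders, and the actual network call and AWS storage step are left out:

```python
from urllib.parse import urlencode

SEARCH_ENDPOINT = "https://www.googleapis.com/youtube/v3/search"

def build_search_url(api_key, channel_id, max_results=50):
    """Build the request URL for listing a public channel's videos."""
    params = {
        "part": "snippet",          # return titles, descriptions, thumbnails
        "channelId": channel_id,
        "type": "video",
        "maxResults": max_results,  # the API caps a single page at 50
        "key": api_key,             # generated in the Google Developers Console
    }
    return SEARCH_ENDPOINT + "?" + urlencode(params)

url = build_search_url("YOUR_API_KEY", "UC_EXAMPLE_CHANNEL")
print(url)
# A real client would now GET this URL (e.g. with urllib.request), page
# through results via the returned nextPageToken, and store the parsed
# JSON in AWS for analysis.
```

Listing more than 50 videos requires following `nextPageToken` across successive requests, which is where the "huge volume, great speed" storage concern in the abstract comes in.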

