Automatic Generalization of Residential Areas Based on “Paradigm” Theory and Big Data

2021 ◽  
Author(s):  
Tianlin Duo ◽  
Peng Zhang

“Paradigm” theory is an important conceptual and practical tool for scientific research, and the research methods of Geographic Information Science follow the laws of its four paradigms. Automatic cartographic generalization is both a key step in map making and a recognized difficult, actively studied problem. Based on large-scale map data and deep learning technology, this paper proposes a problem-solving model for automatic cartographic generalization. Targeting the key difficulties of residential area selection and simplification, selection models and simplification models based on big data and deep learning are constructed respectively, providing new ideas and schemes for solving these problems.
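The paper's concrete network design is not reproduced here; the following minimal sketch only illustrates how a residential-area selection model of this kind can be framed as a binary keep-or-drop classifier over per-polygon features. The feature set and layer sizes are illustrative assumptions, not the authors' exact model.

```python
# Minimal sketch of a residential-area selection model as a binary
# "retain vs. drop" classifier. The features (area, perimeter, distance
# to nearest neighbour, local density) and the network size are
# illustrative assumptions.
import torch
import torch.nn as nn

class SelectionNet(nn.Module):
    def __init__(self, n_features: int = 4):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_features, 32), nn.ReLU(),
            nn.Linear(32, 16), nn.ReLU(),
            nn.Linear(16, 1),            # logit: keep (1) or drop (0)
        )

    def forward(self, x):
        return self.net(x)

# Toy usage: 8 candidate residential polygons, 4 features each.
model = SelectionNet()
features = torch.rand(8, 4)              # [area, perimeter, nn_dist, density]
keep_prob = torch.sigmoid(model(features))
selected = keep_prob.squeeze(1) > 0.5    # polygons retained at the target scale
print(selected)
```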

2020 ◽  
Author(s):  
Anusha Ampavathi ◽  
Vijaya Saradhi T

Big data and its approaches are broadly useful in the healthcare and biomedical sectors for predicting disease. For minor symptoms it is often difficult to consult a doctor at the hospital at any time, so big data provides essential information about diseases on the basis of a patient's symptoms. For many medical organizations, disease prediction is important for making the best feasible health care decisions. However, the conventional medical care model provides structured input that still requires more accurate and consistent prediction. This paper develops multi-disease prediction using an improved deep learning approach. Datasets covering “Diabetes, Hepatitis, lung cancer, liver tumor, heart disease, Parkinson’s disease, and Alzheimer’s disease” are gathered from the benchmark UCI repository for the experiments. The proposed model involves three phases: (a) data normalization, (b) weighted normalized feature extraction, and (c) prediction. First, the dataset is normalized so that every attribute lies within a common range. Next, weighted feature extraction is performed, in which a weight function is multiplied with each attribute value to enlarge its scale of deviation. The weight function is optimized using a combination of two meta-heuristic algorithms, termed the Jaya Algorithm-based Multi-Verse Optimization algorithm (JA-MVO). The optimally extracted features are fed to hybrid deep learning algorithms, a “Deep Belief Network (DBN) and Recurrent Neural Network (RNN)”. As a modification to the hybrid deep learning architecture, the weights of both the DBN and the RNN are optimized using the same hybrid optimization algorithm. A comparative evaluation of the proposed prediction model against existing models confirms its effectiveness across various performance measures.
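As a rough illustration of the three-phase pipeline (normalization, weighted feature extraction, prediction), the sketch below uses fixed placeholder weights in place of the JA-MVO optimizer and a plain MLP in place of the DBN + RNN hybrid; the dataset shapes are synthetic assumptions.

```python
# Minimal sketch of the three-phase pipeline: (a) min-max normalization,
# (b) weighted feature extraction, (c) prediction. The JA-MVO weight
# optimization and the DBN+RNN hybrid are stood in for by fixed weights
# and a plain MLP; names and shapes are illustrative.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
X = rng.random((200, 10))                 # 200 patients, 10 attributes
y = rng.integers(0, 2, size=200)          # disease present / absent

# (a) Data normalization: scale every attribute into [0, 1].
X_norm = (X - X.min(axis=0)) / (X.max(axis=0) - X.min(axis=0) + 1e-12)

# (b) Weighted feature extraction: multiply each attribute by a weight.
# In the paper these weights come from JA-MVO; here they are placeholders.
weights = rng.random(10)
X_weighted = X_norm * weights

# (c) Prediction: a plain MLP stands in for the DBN + RNN hybrid.
clf = MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=500, random_state=0)
clf.fit(X_weighted, y)
print("training accuracy:", clf.score(X_weighted, y))
```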


Big data refers to large-scale data collected for knowledge discovery and has been widely used in various applications. Big data often includes image data from these applications and requires effective techniques to process it. This paper surveys research on big image data to analyse the performance of the available methods. Deep learning techniques provide better performance than other methods, including wavelet-based methods. However, deep learning techniques require more computational time, a drawback that can be overcome with lightweight methods.
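As one example of the lightweight methods mentioned above, the sketch below shows a depthwise-separable convolution block, a generic building block of lightweight CNNs such as MobileNet; it is a standard illustration, not a method taken from any specific surveyed paper.

```python
# Depthwise-separable convolution: a depthwise 3x3 convolution followed
# by a pointwise 1x1 convolution, which costs far fewer multiplications
# than a standard 3x3 convolution with the same channel counts.
import torch
import torch.nn as nn

class DepthwiseSeparableConv(nn.Module):
    def __init__(self, in_ch: int, out_ch: int):
        super().__init__()
        # Depthwise: one filter per input channel.
        self.depthwise = nn.Conv2d(in_ch, in_ch, kernel_size=3,
                                   padding=1, groups=in_ch)
        # Pointwise: 1x1 convolution mixes channels.
        self.pointwise = nn.Conv2d(in_ch, out_ch, kernel_size=1)

    def forward(self, x):
        return self.pointwise(self.depthwise(x))

block = DepthwiseSeparableConv(32, 64)
x = torch.rand(1, 32, 56, 56)
print(block(x).shape)   # torch.Size([1, 64, 56, 56])
```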


2018 ◽  
Vol 29 (3) ◽  
pp. 67-88 ◽  
Author(s):  
Wen Zeng ◽  
Hongjiao Xu ◽  
Hui Li ◽  
Xiang Li

In the big data era, it is a great challenge to identify high-level abstract features in a flood of sci-tech literature to achieve in-depth analysis of the data. Deep learning technology has developed rapidly and found applications in many fields, but it has rarely been used in research on sci-tech literature data. This article introduces a method for representing the terminology of sci-tech literature in a vector space based on a deep learning model. It explores and adopts a deep autoencoder (AE) model to reduce the dimensionality of the input word-vector features, and puts forward a methodology for correlation analysis of sci-tech literature based on deep learning technology. The experimental results show that the processing of sci-tech literature data can be simplified into computations over vectors in a multi-dimensional vector space, and that similarity in this vector space can represent similarity in text semantics. The method can be used for correlation analysis of subject content between sci-tech publications of the same or different types.
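A minimal sketch of the described approach, assuming a 300-dimensional input word vector and an illustrative layer layout: a deep autoencoder compresses the vectors, and cosine similarity in the reduced space stands for semantic similarity.

```python
# Deep autoencoder for dimensionality reduction of term/document vectors,
# with cosine similarity computed in the compressed space. Layer sizes
# and the input dimensionality are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class DeepAE(nn.Module):
    def __init__(self, in_dim: int = 300, code_dim: int = 32):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(in_dim, 128), nn.ReLU(),
            nn.Linear(128, code_dim),
        )
        self.decoder = nn.Sequential(
            nn.Linear(code_dim, 128), nn.ReLU(),
            nn.Linear(128, in_dim),
        )

    def forward(self, x):
        code = self.encoder(x)
        return self.decoder(code), code

ae = DeepAE()
docs = torch.rand(5, 300)                 # 5 term/document vectors
recon, codes = ae(docs)
loss = F.mse_loss(recon, docs)            # reconstruction training objective
print(float(loss))

# Similarity of two documents in the reduced vector space.
sim = F.cosine_similarity(codes[0:1], codes[1:2])
print(float(sim))
```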


2019 ◽  
Vol 131 ◽  
pp. 01118
Author(s):  
Fan Tongke

Aiming at the problem of disease diagnosis in large-scale crops, this paper combines machine vision and deep learning technology to propose a disease-recognition algorithm built on an LM_BP neural network. Images of multiple crop leaves are collected, the pictures are partitioned with image-cutting techniques, and the data are obtained with a colour-distance feature extraction method. The data are fed into the disease-recognition model, feature weights are set, and the model is trained repeatedly to obtain accurate results. Applied to corn disease, the model proves simple and easy to implement, and the data are highly reliable.
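A minimal sketch of the described pipeline, with an assumed healthy-leaf reference colour, an assumed patch size, and scikit-learn's MLPClassifier standing in for the LM_BP network:

```python
# Cut leaf images into patches, extract one colour-distance feature per
# patch, and feed the features to a BP-style neural network. The
# reference colour, patch size, and classifier are illustrative stand-ins.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(1)
HEALTHY_RGB = np.array([60.0, 140.0, 60.0])   # assumed healthy-leaf colour

def color_distance_features(image: np.ndarray, patch: int = 32) -> np.ndarray:
    """Cut an HxWx3 image into patches; return one colour distance per patch."""
    h, w, _ = image.shape
    feats = []
    for i in range(0, h - patch + 1, patch):
        for j in range(0, w - patch + 1, patch):
            mean_rgb = image[i:i + patch, j:j + patch].reshape(-1, 3).mean(axis=0)
            feats.append(np.linalg.norm(mean_rgb - HEALTHY_RGB))
    return np.array(feats)

# Toy dataset: 40 synthetic 128x128 leaf images with binary disease labels.
images = rng.random((40, 128, 128, 3)) * 255
labels = rng.integers(0, 2, size=40)
X = np.stack([color_distance_features(img) for img in images])

clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=1000, random_state=0)
clf.fit(X, labels)
print("training accuracy:", clf.score(X, labels))
```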


2018 ◽  
Author(s):  
Anisha Keshavan ◽  
Jason D. Yeatman ◽  
Ariel Rokem

Research in many fields has become increasingly reliant on large and complex datasets. “Big Data” holds untold promise to rapidly advance science by tackling new questions that cannot be answered with smaller datasets. While powerful, research with Big Data poses unique challenges, as many standard lab protocols rely on experts examining each one of the samples. This is not feasible for large-scale datasets because manual approaches are time-consuming and hence difficult to scale. Meanwhile, automated approaches lack the accuracy of examination by highly trained scientists and may introduce major errors, sources of noise, and unforeseen biases into these large and complex datasets. Our proposed solution is to 1) start with a small, expertly labelled dataset, 2) amplify labels through web-based tools that engage citizen scientists, and 3) train machine learning on the amplified labels to emulate expert decision making. As a proof of concept, we developed a system to quality control a large dataset of three-dimensional magnetic resonance images (MRI) of human brains. An initial dataset of 200 brain images labeled by experts was amplified by citizen scientists to label 722 brains, with over 80,000 ratings collected through a simple web interface. A deep learning algorithm was then trained to predict data quality from a combination of the citizen scientist labels that accounts for differences in classification quality across citizen scientists. In an ROC analysis (on held-out test data), the deep learning network performed as well as a state-of-the-art, specialized algorithm (MRIQC) for quality control of T1-weighted images, each with an area under the curve of 0.99. Finally, as a specific practical application of the method, we explore how brain image quality relates to the replicability of a well-established relationship between brain volume and age over development. Combining citizen science and deep learning can generalize and scale expert decision making; this is particularly important in emerging disciplines where specialized, automated tools do not already exist.
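A minimal sketch of the label-amplification step, assuming a simple reliability-weighted averaging scheme rather than the authors' exact aggregation model: each citizen scientist is weighted by agreement with the expert-labelled subset, and the weighted mean rating becomes the soft training target for the deep network.

```python
# Combine citizen-scientist pass/fail ratings into one soft quality label
# per brain image, weighting each rater by agreement with expert labels.
# The weighting scheme and data shapes are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(2)
n_raters, n_images = 20, 100
ratings = rng.integers(0, 2, size=(n_raters, n_images))   # 1 = pass, 0 = fail
expert = rng.integers(0, 2, size=30)                       # expert labels for the first 30 images

# Rater reliability = agreement with the expert-labelled subset.
reliability = (ratings[:, :30] == expert).mean(axis=1)

# Soft label per image = reliability-weighted mean rating.
soft_labels = (reliability[:, None] * ratings).sum(axis=0) / reliability.sum()
print(soft_labels[:5])   # values in [0, 1], usable as training targets for the deep network
```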


2020 ◽  
Vol 03 (04) ◽  
pp. 7-13
Author(s):  
Elcin Nizami Huseyn

Medical imaging technology plays an important role in the detection, diagnosis, and treatment of diseases. Because human expert judgement can be inconsistent, machine learning technology is expected to help researchers and physicians improve the accuracy of imaging diagnosis and reduce the imbalance of medical resources. This article systematically summarizes deep learning methods, introduces applied research on deep learning in medical imaging, and discusses the limitations of deep learning in this domain. Key words: Artificial Intelligence, Deep Learning, Medical Imaging, Big Data


2020 ◽  
Vol 10 (7) ◽  
pp. 2361
Author(s):  
Fan Yang ◽  
Wenjin Zhang ◽  
Laifa Tao ◽  
Jian Ma

As we enter the era of big data, we must face industrial big data that is massive, diverse, high-velocity, and variable. Deep learning technology has been widely used to deal effectively with data possessing these characteristics. However, existing methods require substantial human involvement that depends heavily on domain expertise and may therefore be non-representative and biased when moving from one task to a similar one. For the wide variety of prognostics and health management (PHM) tasks, how to apply an existing deep learning algorithm to a similar task, so as to reduce development effort and data collection costs, has become an urgent problem. Based on the idea of transfer learning and the structure of deep learning PHM algorithms, this paper proposes two transfer strategies that transfer different elements of a deep learning PHM algorithm, analyzes the transfer scenarios that arise in practical applications, and proposes the strategy applicable to each scenario. Finally, a convolutional neural network (CNN) based deep learning algorithm for bearing fault diagnosis is transferred with the proposed method, across different working conditions and to different objects, respectively. The experiments verify the value and effectiveness of the proposed method and indicate the best choice of transfer strategy.
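A minimal sketch of one such transfer strategy, under an assumed 1-D CNN architecture: the convolutional feature extractor trained under the source working condition is frozen, and only the classifier is fine-tuned on target-condition vibration data. The architecture and checkpoint name are illustrative assumptions, not the paper's exact network.

```python
# Transfer a bearing-fault CNN across working conditions by freezing the
# feature extractor and fine-tuning the classifier head.
import torch
import torch.nn as nn

class BearingCNN(nn.Module):
    def __init__(self, n_classes: int = 4):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=64, stride=8), nn.ReLU(),
            nn.Conv1d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),
        )
        self.classifier = nn.Linear(32, n_classes)

    def forward(self, x):
        return self.classifier(self.features(x).squeeze(-1))

model = BearingCNN()
# model.load_state_dict(torch.load("source_condition.pt"))  # hypothetical source-domain checkpoint

# Transfer: freeze the feature extractor, retrain only the classifier.
for p in model.features.parameters():
    p.requires_grad = False
optimizer = torch.optim.Adam(model.classifier.parameters(), lr=1e-3)

# One toy fine-tuning step on target-condition vibration segments.
x = torch.rand(8, 1, 2048)                       # 8 segments, 2048 samples each
y = torch.randint(0, 4, (8,))
loss = nn.CrossEntropyLoss()(model(x), y)
loss.backward()
optimizer.step()
print(float(loss))
```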

