Adaptive Algorithms for Intelligent Geometric Computing

2012 ◽  
pp. 97-104
Author(s):  
M. L. Gavrilova

This chapter spans topics from such important areas as Artificial Intelligence, Computational Geometry, and Biometric Technologies. The primary focus is on the proposed Adaptive Computation Paradigm and its applications to surface modeling and biometric processing. The availability of much more affordable storage and high-resolution image-capturing devices has contributed significantly over the past few years to the accumulation of very large datasets (such as GIS maps, biometric samples, and videos). At the same time, it has created significant challenges, driven by higher-than-ever data volumes and complexity, that can no longer be resolved through the acquisition of more memory, faster processors, or the optimization of existing algorithms. These developments justify the need for radically new concepts for massive data storage, processing, and visualization. To address this need, the chapter presents an original methodology based on the paradigm of Adaptive Geometric Computing. The methodology enables storing complex data in a compact form, providing efficient access to it, preserving a high level of detail, and visualizing dynamic changes in a smooth and continuous manner. The first part of the chapter discusses adaptive algorithms in real-time visualization, specifically in GIS (Geographic Information Systems) applications. Data structures such as the Real-time Optimally Adapting Mesh (ROAM) and the Progressive Mesh (PM) are briefly surveyed. The Adaptive Spatial Memory (ASM) method, developed by R. Apu and M. Gavrilova, is then introduced. This method allows fast and efficient visualization of complex data sets representing terrains, landscapes, and Digital Elevation Models (DEM); its advantages are briefly discussed. The second part of the chapter presents the application of the adaptive computation paradigm and evolutionary computing to missile simulation, through which patterns of complex behavior can be developed and analyzed. The final part of the chapter combines the concept of adaptive computation with topology-based techniques and discusses their application to the challenging area of biometric computing.
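
To give a concrete feel for the kind of adaptivity discussed in the first part of the chapter, the sketch below shows a simplified split/merge decision of the sort used by ROAM-style adaptive terrain meshes: a triangle is refined when its projected screen-space error grows too large and collapsed when the viewer moves away. This is an illustrative toy, not the ASM algorithm itself; the error metric, the thresholds, and the split helper are assumptions.

```python
# Toy split/merge test in the spirit of ROAM-style adaptive meshes.
# The error metric, thresholds, and split rule are hypothetical, not ASM's formulation.
from dataclasses import dataclass, field
from typing import List

@dataclass
class TriNode:
    geometric_error: float        # error introduced if this triangle is not split further
    distance_to_camera: float
    children: List["TriNode"] = field(default_factory=list)

def screen_space_error(node: TriNode) -> float:
    # Project the geometric error to screen space (simplified perspective scaling).
    return node.geometric_error / max(node.distance_to_camera, 1e-6)

def split(node: TriNode) -> List["TriNode"]:
    # Toy split: two children, each carrying roughly half the parent's error.
    half = node.geometric_error / 2
    return [TriNode(half, node.distance_to_camera), TriNode(half, node.distance_to_camera)]

def refine(node: TriNode, split_at: float, merge_at: float) -> None:
    # Split when the projected error is too large, merge when it is small enough.
    err = screen_space_error(node)
    if err > split_at and not node.children:
        node.children = split(node)
    elif err < merge_at and node.children:
        node.children = []        # collapse back to a coarser triangle
    for child in node.children:
        refine(child, split_at, merge_at)

root = TriNode(geometric_error=8.0, distance_to_camera=10.0)
refine(root, split_at=0.5, merge_at=0.1)
```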



2018 ◽  
Vol 5 (1) ◽  
pp. 47-55
Author(s):  
Florensia Unggul Damayanti

Data mining helps industries make intelligent decisions on complex problems. Data mining algorithms can be applied to data for forecasting, identifying patterns, generating rules and recommendations, analyzing sequences in complex data sets, and retrieving fresh insights. The growth of technology and the variety of available data mining techniques give industries the opportunity to explore and gain valuable information from their data and to use that information to support business decision making. This paper implements classification data mining to retrieve knowledge from customer databases and support the marketing department in planning a strategy for predicting the plan premium. The dataset is decomposed through conceptual analysis to identify data characteristics that can be used as input parameters of the data mining model. Business decisions and applications are characterized by processing steps, processing characteristics, and processing outcomes (Seng, J.L., Chen T.C. 2010). This paper sets up data mining experiments based on the J48 and Random Forest classifiers and sheds light on the performance evaluation of J48 versus Random Forest in the context of an insurance-industry dataset. The experimental results cover the classification accuracy and efficiency of J48 and Random Forest, and also identify the attributes most useful for predicting the plan premium in the context of strategic planning to support business strategy.
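
As a rough illustration of the experimental setup described above, the sketch below compares a single decision tree with a Random Forest on a customer dataset and reports which attributes carry the most predictive weight. It uses scikit-learn's DecisionTreeClassifier (a CART implementation, standing in for Weka's J48/C4.5) and RandomForestClassifier; the file name, column names, and the plan_premium label are hypothetical.

```python
# Sketch of a J48-style vs. Random Forest comparison using scikit-learn stand-ins.
# "insurance_customers.csv" and the "plan_premium" target column are hypothetical.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

df = pd.read_csv("insurance_customers.csv")                 # hypothetical dataset
X = pd.get_dummies(df.drop(columns=["plan_premium"]))       # one-hot encode categoricals
y = df["plan_premium"]
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=42)

for name, model in [("decision tree", DecisionTreeClassifier(random_state=42)),
                    ("random forest", RandomForestClassifier(n_estimators=100, random_state=42))]:
    model.fit(X_train, y_train)
    acc = accuracy_score(y_test, model.predict(X_test))
    print(f"{name}: accuracy = {acc:.3f}")

# Feature importances hint at which attributes best predict the plan premium.
forest = RandomForestClassifier(n_estimators=100, random_state=42).fit(X_train, y_train)
print(sorted(zip(forest.feature_importances_, X.columns), reverse=True)[:5])
```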


Sensors ◽  
2021 ◽  
Vol 21 (14) ◽  
pp. 4736
Author(s):  
Sk. Tanzir Mehedi ◽  
Adnan Anwar ◽  
Ziaur Rahman ◽  
Kawsar Ahmed

The Controller Area Network (CAN) bus serves as an important protocol in real-time In-Vehicle Network (IVN) systems because of its simple, suitable, and robust architecture. IVN devices nevertheless remain insecure and vulnerable due to complex, data-intensive architectures that greatly increase accessibility to unauthorized networks and the possibility of various types of cyberattacks. The detection of cyberattacks in IVN devices has therefore become an area of growing interest. With the rapid development of IVNs and evolving threat types, traditional machine learning-based intrusion detection systems (IDS) must be updated to cope with the security requirements of the current environment. The progress of deep learning and deep transfer learning, and their impactful outcomes in several areas, point to them as effective solutions for network intrusion detection. This manuscript proposes a deep transfer learning-based IDS model for IVNs with improved performance in comparison to several other existing models. The unique contributions include effective attribute selection, best suited to identifying malicious CAN messages and accurately detecting normal and abnormal activities; the design of a deep transfer learning-based LeNet model; and an evaluation on real-world data. To this end, an extensive experimental performance evaluation has been conducted. The architecture, together with the empirical analyses, shows that the proposed IDS greatly improves detection accuracy over mainstream machine learning, deep learning, and benchmark deep transfer learning models, and demonstrates better performance for real-time IVN security.
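
The following sketch illustrates the general idea of a LeNet-style classifier with a simple transfer-learning step (pre-train, freeze the convolutional feature extractor, fine-tune the dense head). It is not the authors' exact architecture: the input shape, the way CAN frames would be reshaped into two-dimensional inputs, and the placeholder data are assumptions made purely for illustration.

```python
# Hedged sketch of a LeNet-style network with a simple transfer-learning step.
# Input shape, layer sizes, and the random placeholder data are assumptions.
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, models

def build_lenet(input_shape=(29, 29, 1), n_classes=2):
    return models.Sequential([
        layers.Input(shape=input_shape),
        layers.Conv2D(6, 5, activation="tanh", padding="same"),
        layers.AveragePooling2D(),
        layers.Conv2D(16, 5, activation="tanh"),
        layers.AveragePooling2D(),
        layers.Flatten(),
        layers.Dense(120, activation="tanh"),
        layers.Dense(84, activation="tanh"),
        layers.Dense(n_classes, activation="softmax"),
    ])

# 1) Pre-train on a (synthetic, placeholder) source dataset of CAN-like inputs.
x_src, y_src = np.random.rand(256, 29, 29, 1), np.random.randint(0, 2, 256)
model = build_lenet()
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy", metrics=["accuracy"])
model.fit(x_src, y_src, epochs=1, verbose=0)

# 2) Transfer: freeze the convolutional feature extractor, fine-tune the dense
#    head on the (placeholder) target IVN dataset.
for layer in model.layers:
    if isinstance(layer, (layers.Conv2D, layers.AveragePooling2D)):
        layer.trainable = False
x_tgt, y_tgt = np.random.rand(128, 29, 29, 1), np.random.randint(0, 2, 128)
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy", metrics=["accuracy"])
model.fit(x_tgt, y_tgt, epochs=1, verbose=0)
```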


2021 ◽  
Vol 22 (5) ◽  
pp. 2659
Author(s):  
Gianluca Costamagna ◽  
Giacomo Pietro Comi ◽  
Stefania Corti

In the last decade, different research groups in the academic setting have developed induced pluripotent stem cell-based protocols to generate three-dimensional, multicellular, neural organoids. Their use to model brain biology, early neural development, and human diseases has provided new insights into the pathophysiology of neuropsychiatric and neurological disorders, including microcephaly, autism, Parkinson’s disease, and Alzheimer’s disease. However, the adoption of organoid technology for large-scale drug screening in the industry has been hampered by challenges with reproducibility, scalability, and translatability to human disease. Potential technical solutions to expand their use in drug discovery pipelines include Clustered Regularly Interspaced Short Palindromic Repeats (CRISPR) to create isogenic models, single-cell RNA sequencing to characterize the model at a cellular level, and machine learning to analyze complex data sets. In addition, high-content imaging, automated liquid handling, and standardized assays represent other valuable tools toward this goal. Though several open issues still hamper the full implementation of the organoid technology outside academia, rapid progress in this field will help to prompt its translation toward large-scale drug screening for neurological disorders.


2016 ◽  
Vol 12 (03) ◽  
pp. 64
Author(s):  
Haifeng Hu

An online automatic disaster monitoring system can reduce or prevent geological mine disasters to protect life and property. Global Navigation Satellite System receivers and the GeoRobot are two kinds of in-situ geosensors widely used for monitoring ground movements near mines. A combined monitoring solution is presented that integrates the advantages of both. In addition, a geosensor network system to be used for geological mine disaster monitoring is described. A complete online automatic mine disaster monitoring system including data transmission, data management, and complex data analysis is outlined. This paper proposes a novel overall architecture for mine disaster monitoring. This architecture can seamlessly integrate sensors for long-term, remote, and near real-time monitoring. In the architecture, three layers are used to collect, manage and process observation data. To demonstrate the applicability of the method, a system encompassing this architecture has been deployed to monitor the safety and stability of a slope at an open-pit mine in Inner Mongolia.
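
The sketch below is a minimal, hypothetical rendering of the three-layer idea (collect, manage, and process observation data); the class names, fields, and alarm threshold are illustrative and are not taken from the deployed system.

```python
# Illustrative three-layer pipeline (collect -> manage -> process).
# All names, fields, and thresholds are hypothetical stand-ins.
import time
from dataclasses import dataclass
from typing import List

@dataclass
class Observation:                      # e.g. a GNSS displacement reading
    sensor_id: str
    timestamp: float
    displacement_mm: float

class CollectionLayer:
    """Gathers raw readings from in-situ geosensors (stubbed here)."""
    def read(self) -> List[Observation]:
        now = time.time()
        return [Observation("gnss-01", now, 1.2), Observation("gnss-02", now, 7.8)]

class ManagementLayer:
    """Stores observations; a real system would persist them to a database."""
    def __init__(self) -> None:
        self.archive: List[Observation] = []
    def store(self, obs: List[Observation]) -> None:
        self.archive.extend(obs)

class ProcessingLayer:
    """Analyzes stored data and flags sensors reporting abnormal movement."""
    def check(self, archive: List[Observation], threshold_mm: float = 5.0) -> List[str]:
        return [o.sensor_id for o in archive if o.displacement_mm > threshold_mm]

collect, manage, process = CollectionLayer(), ManagementLayer(), ProcessingLayer()
manage.store(collect.read())
print("alarms:", process.check(manage.archive))
```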


2018 ◽  
Vol 14 (1) ◽  
pp. 30-50 ◽  
Author(s):  
William H. Money ◽  
Stephen J. Cohen

This article analyzes the properties of unknown faults in knowledge management and Big Data systems that process Big Data in real time. These faults introduce risks and threaten the knowledge pyramid and the decisions based on knowledge gleaned from large volumes of complex data. The authors hypothesize that faults not yet encountered may require fault handling, an analytic model, and an architectural framework to assess and manage the faults, to mitigate the risks of correlating or integrating otherwise uncorrelated Big Data, and to ensure the source pedigree, quality, set integrity, freshness, and validity of the data. New architectures, methods, and tools for handling and analyzing Big Data systems functioning in real time will contribute to organizational knowledge and performance. System designs must mitigate faults arising from real-time streaming processes while ensuring that variables such as synchronization, redundancy, and latency are addressed. The article concludes that, with improved designs, real-time Big Data systems may continuously deliver the value of streaming Big Data.
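
As a loose illustration of the kind of safeguard the article argues for, the sketch below gates each streaming record on source pedigree, freshness, and basic validity before it can contribute to downstream knowledge; the field names, trusted-source list, and freshness bound are hypothetical.

```python
# Hedged sketch: admit a streaming record only after pedigree, freshness,
# and validity checks. Field names and thresholds are hypothetical.
import time
from typing import Any, Dict, Optional

TRUSTED_SOURCES = {"sensor-feed-a", "partner-api-b"}     # assumed pedigree whitelist
MAX_AGE_SECONDS = 5.0                                    # assumed freshness bound

def admit(record: Dict[str, Any], now: Optional[float] = None) -> bool:
    """Return True only if the record passes pedigree, freshness, and validity checks."""
    now = time.time() if now is None else now
    if record.get("source") not in TRUSTED_SOURCES:
        return False                                     # unknown pedigree
    if now - record.get("timestamp", 0.0) > MAX_AGE_SECONDS:
        return False                                     # stale (latency fault)
    if record.get("value") is None:
        return False                                     # incomplete payload
    return True

stream = [
    {"source": "sensor-feed-a", "timestamp": time.time(), "value": 42},
    {"source": "unknown", "timestamp": time.time(), "value": 7},
]
accepted = [r for r in stream if admit(r)]
print(f"accepted {len(accepted)} of {len(stream)} records")
```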


Author(s):  
Abou_el_ela Abdou Hussein

Day by day, advanced web technologies have led to tremendous growth in the volume of data generated daily. This mountain of huge, widely distributed data sets leads to the phenomenon called big data: collections of massive, heterogeneous, unstructured, enormous, and complex data. The big data life cycle can be represented as collecting (capturing), storing, distributing, manipulating, interpreting, analyzing, investigating, and visualizing big data. Traditional techniques such as Relational Database Management Systems (RDBMS) cannot handle big data because of their inherent limitations, so advances in computing architecture are required to handle both the data storage requisites and the heavy processing needed to analyze huge volumes and varieties of data economically. Among the many technologies for manipulating big data, one is Hadoop. Hadoop can be understood as an open-source distributed data processing framework that is one of the prominent and well-known solutions to the problem of handling big data. Apache Hadoop is based on the Google File System and the MapReduce programming paradigm. In this paper we survey big data characteristics, starting from the first three V's, which research has extended over time to more than fifty-six V's, and compare researchers' definitions to reach the best representation and precise clarification of all the big data V characteristics. We highlight the challenges facing big data processing, how to overcome these challenges using Hadoop, and its use in processing big data sets as a solution for resolving various problems in a distributed, cloud-based environment. This paper mainly focuses on different components of Hadoop, such as Hive, Pig, and HBase. We also give a thorough description of Hadoop's pros and cons, along with improvements that address Hadoop's problems, including a proposed cost-efficient scheduler algorithm for heterogeneous Hadoop systems.
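
Since the paper centers on Hadoop and its MapReduce heritage, the sketch below walks through the MapReduce pattern itself (map, shuffle by key, reduce) in-process, using the classic word-count example; a real Hadoop job would distribute these phases over HDFS blocks across many nodes rather than run them in a single script.

```python
# Minimal in-process illustration of the MapReduce paradigm (map -> shuffle -> reduce)
# using word count; a real Hadoop job distributes each phase across the cluster.
from collections import defaultdict
from typing import Dict, Iterable, List, Tuple

def map_phase(line: str) -> Iterable[Tuple[str, int]]:
    # Emit (word, 1) for every word in an input line.
    for word in line.lower().split():
        yield word, 1

def shuffle(pairs: Iterable[Tuple[str, int]]) -> Dict[str, List[int]]:
    # Group all values by key, as the framework does between map and reduce.
    grouped = defaultdict(list)
    for key, value in pairs:
        grouped[key].append(value)
    return grouped

def reduce_phase(key: str, values: List[int]) -> Tuple[str, int]:
    # Sum the per-key counts emitted by the mappers.
    return key, sum(values)

lines = ["big data needs distributed processing", "hadoop processes big data"]
mapped = [pair for line in lines for pair in map_phase(line)]
counts = dict(reduce_phase(k, v) for k, v in shuffle(mapped).items())
print(counts)   # e.g. {'big': 2, 'data': 2, ...}
```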


Author(s):  
Michael G. Mauk

Image capturing, processing, and analysis have numerous uses in solar cell research, device and process development and characterization, process control, and quality assurance and inspection. Solar cell image processing is expanding due to the increasing performance (resolution, sensitivity, spectral range) and low cost of commercial CCD and infrared cameras. Methods and applications are discussed, with a primary focus on monocrystalline and polycrystalline silicon solar cells imaged at visible and infrared (thermography) wavelengths. The most prominent applications relate to the mapping of minority carrier lifetime, shunts, and defects in solar cell wafers at various stages of the manufacturing process. Other applications include measurements of surface texture and reflectivity, surface cleanliness, integrity of metallization lines, uniformity of coatings, and crystallographic texture and grain size. Image processing offers the capability to assess large areas (> 100 cm²) in a non-contact, fast (~1 second), and modest-cost manner. The challenge is to quantify and interpret the image data in order to better inform device design, process engineering, and quality control. Many promising solar cell technologies fail in the transition from laboratory to factory due to issues related to scale-up in area and manufacturing throughput. Image analysis provides an effective method to assess areal uniformity, device-to-device reproducibility, and defect densities. Greater integration of image analysis, from research devices to field testing of modules, will continue as the photovoltaics industry matures.
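
As a hedged illustration of the sort of analysis described above, the sketch below thresholds a synthetic, placeholder luminescence-style image to flag unusually dark regions as candidate defects and reports a crude areal-uniformity figure; the image, thresholds, and metrics are assumptions, not a production inspection routine.

```python
# Toy defect flagging on a synthetic luminescence-style image.
# The image, the 3-sigma threshold, and the uniformity metric are illustrative.
import numpy as np

rng = np.random.default_rng(0)
cell = rng.normal(loc=1.0, scale=0.05, size=(512, 512))   # stand-in for a camera frame
cell[200:220, 100:300] *= 0.4                              # injected "defect" stripe

mean, std = cell.mean(), cell.std()
defect_mask = cell < mean - 3 * std            # pixels far below the typical response
defect_fraction = defect_mask.mean()
uniformity = 1.0 - std / mean                  # crude areal-uniformity metric

print(f"defective area: {defect_fraction:.2%}")
print(f"uniformity index: {uniformity:.3f}")
```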


Author(s):  
Avinash Navlani ◽  
V. B. Gupta

In the last couple of decades, clustering has become a crucial research problem in the data mining community. Clustering refers to the partitioning of data objects, such as records and documents, into groups or clusters of similar characteristics. Clustering is unsupervised learning; because of this unsupervised nature, there is no unique solution for all problems. Most of the time, complex data sets require explanation through multiple clusterings. Traditional clustering approaches generate a single clustering, yet a dataset may contain more than one pattern, each of which can be interesting from a different perspective. Alternative clustering aims to find all the different groupings of a data set such that each grouping has high quality and is distinct from the others. This chapter gives an overall view of alternative clustering: its various approaches, related work, a comparison with easily confused related terms such as subspace, multi-view, and ensemble clustering, applications, issues, and challenges.
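
The toy sketch below illustrates the alternative-clustering criterion described above (each grouping should be of high quality and distinct from the others): a base k-means clustering is compared against candidates built in random feature subspaces, keeping only those with a reasonable silhouette score and a low adjusted Rand index relative to the base solution. It is a naive illustration under these stated assumptions, not any specific published algorithm.

```python
# Naive alternative-clustering illustration: keep a candidate grouping only if
# it is reasonably good (silhouette) AND dissimilar to the base one (low ARI).
import numpy as np
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs
from sklearn.metrics import adjusted_rand_score, silhouette_score

X, _ = make_blobs(n_samples=300, centers=4, n_features=6, random_state=0)

base = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(X)

alternatives = []
for seed in range(1, 20):
    rng = np.random.default_rng(seed)
    features = rng.choice(X.shape[1], size=3, replace=False)   # random feature subspace
    labels = KMeans(n_clusters=4, n_init=10, random_state=seed).fit_predict(X[:, features])
    quality = silhouette_score(X[:, features], labels)
    distinctness = 1.0 - adjusted_rand_score(base, labels)      # 0 = identical, 1 = unrelated
    if quality > 0.3 and distinctness > 0.5:
        alternatives.append((seed, quality, distinctness))

print(f"found {len(alternatives)} candidate alternative clusterings")
```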


Author(s):  
Phillip L. Manning ◽  
Peter L. Falkingham

Dinosaurs successfully conjure images of lost worlds and forgotten lives. Our understanding of these iconic, extinct animals now comes from many disciplines, not just the science of palaeontology. In recent years palaeontology has benefited from the application of new and existing techniques from physics, biology, chemistry, and engineering, but especially computational science. The application of computers in palaeontology is highlighted in this chapter as a key area of development in studying fossils. Advances in high performance computing (HPC) have greatly aided and abetted multiple disciplines and technologies that now feed palaeontological research, especially when dealing with large and complex data sets. We also give examples of how such multidisciplinary research can be used to communicate not only specific discoveries in palaeontology, but also the methods and ideas from interrelated disciplines, to wider audiences. Dinosaurs represent a useful vehicle that can help enable wider public engagement, communicating complex science in digestible chunks.

