Application of artificial intelligence for Euler solutions clustering

Geophysics ◽  
2003 ◽  
Vol 68 (1) ◽  
pp. 168-180 ◽  
Author(s):  
Valentine Mikhailov ◽  
Armand Galdeano ◽  
Michel Diament ◽  
Alexei Gvishiani ◽  
Sergei Agayan ◽  
...  

Results of Euler deconvolution strongly depend on the selection of viable solutions. Synthetic calculations using multiple causative sources show that Euler solutions cluster in the vicinity of causative bodies even when they do not group densely about the perimeter of the bodies. We have developed a clustering technique to serve as a tool for selecting appropriate solutions. The clustering technique uses a methodology based on artificial intelligence, and it was originally designed to classify large data sets. It is based on a geometrical approach to study object concentration in a finite metric space of any dimension. The method uses a formal definition of cluster and includes free parameters that search for clusters of given properties. Tests on synthetic and real data showed that the clustering technique successfully outlines causative bodies more accurately than other methods used to discriminate Euler solutions. In complex field cases, such as the magnetic field in the Gulf of Saint Malo region (Brittany, France), the method provides dense clusters, which more clearly outline possible causative sources. In particular, it allows one to trace offshore the main inland tectonic structures and to study their interrelationships in the Gulf of Saint Malo. The clusters provide solutions associated with particular bodies, or parts of bodies, allowing the analysis of different clusters of Euler solutions separately. This may allow computation of average parameters for individual causative bodies. Those measurements of the anomalous field that yield clusters also form dense clusters themselves. Application of this clustering technique thus outlines areas where the influence of different causative sources is more prominent. This allows one to focus on these areas for more detailed study, using different window sizes, structural indices, etc.
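The abstract does not reproduce the clustering algorithm itself, so the following is only a rough sketch of the underlying idea: keep densely clustered Euler solutions, discard scattered ones, and compute per-cluster averages. It uses DBSCAN as a stand-in density-based method; the eps and min_samples thresholds play a role loosely analogous to the free parameters mentioned above, and all names and values are illustrative assumptions rather than the authors' method.

```python
# Illustrative sketch only: DBSCAN as a stand-in for the paper's own
# clustering technique. Dense groups of Euler solutions are kept; scattered
# solutions (label -1) are discarded; centroids give crude per-body averages.
import numpy as np
from sklearn.cluster import DBSCAN

def select_euler_clusters(solutions, eps=250.0, min_samples=10):
    """solutions: (N, 3) array of Euler solution coordinates (x, y, depth)."""
    labels = DBSCAN(eps=eps, min_samples=min_samples).fit_predict(solutions)
    clusters = {}
    for label in set(labels):
        if label == -1:          # -1 marks scattered solutions; discard them
            continue
        members = solutions[labels == label]
        clusters[label] = {
            "solutions": members,
            "centroid": members.mean(axis=0),  # crude per-body average parameters
        }
    return clusters

# Synthetic example: solutions scattered around two hypothetical causative bodies
rng = np.random.default_rng(0)
body_a = rng.normal([1000.0, 2000.0, 500.0], 80.0, size=(200, 3))
body_b = rng.normal([4000.0, 2500.0, 900.0], 80.0, size=(150, 3))
noise = rng.uniform([0, 0, 0], [5000, 5000, 2000], size=(100, 3))
clusters = select_euler_clusters(np.vstack([body_a, body_b, noise]))
for label, info in clusters.items():
    print(label, len(info["solutions"]), info["centroid"].round(1))
```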

2020 ◽  
Vol 54 (10) ◽  
pp. 1038-1046
Author(s):  
Barbara J. Zarowitz

Advances in the application of artificial intelligence, digitization, technology, cloud computing, and wearable devices in health care predict an exciting future for health care professionals and our patients. Projections suggest an older, generally healthier, better-informed but financially less secure patient population of wider cultural and ethnic diversity who live throughout the United States. A pragmatic yet structured approach is recommended to prepare health care professionals and patients for emerging pharmacotherapy needs. Clinician training should include genomics, cloud computing, use of large data sets, implementation science, and cultural competence. Patients will need support for wearable devices and reassurance regarding digital medicine.


1989 ◽  
Vol 264 (1) ◽  
pp. 175-184 ◽  
Author(s):  
L Garfinkel ◽  
D M Cohen ◽  
V W Soo ◽  
D Garfinkel ◽  
C A Kulikowski

We have developed a computer method based on artificial-intelligence techniques for qualitatively analysing steady-state initial-velocity enzyme kinetic data. We have applied our system to experiments on hexokinase from a variety of sources: yeast, ascites and muscle. Our system accepts qualitative stylized descriptions of experimental data, infers constraints from the observed data behaviour and then compares the experimentally inferred constraints with corresponding theoretical model-based constraints. It is desirable to have large data sets which include the results of a variety of experiments. Human intervention is needed to interpret non-kinetic information, differences in conditions, etc. The several experimenters whose data were studied used different strategies to formulate mechanisms for their enzyme preparations, including different methods (product inhibitors or alternate substrates), different experimental protocols (monitoring enzyme activity differently) and different experimental conditions (temperature, pH or ionic strength). The different ordered and rapid-equilibrium mechanisms proposed by these experimenters were generally consistent with their data. When the constraints derived from the several experimental data sets are compared, they disagree much less than the published mechanisms do, and some of the disagreement can be ascribed to different experimental conditions (especially ionic strength).
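As a purely illustrative aside, the core step the abstract describes, comparing experimentally inferred qualitative constraints with theoretical model-based ones, could be sketched along the following lines; the constraint representation and all names here are hypothetical and are not taken from the authors' system.

```python
# Minimal sketch of comparing sets of qualitative constraints; the
# slope/sign/condition representation below is a hypothetical stand-in
# for whatever stylized descriptions the actual system uses.
from dataclasses import dataclass

@dataclass(frozen=True)
class Constraint:
    quantity: str      # e.g. "slope_1/v_vs_1/[glucose]"
    relation: str      # "positive", "negative", "zero"
    condition: str     # e.g. "ATP saturating, pH 7.5"

def compare(inferred: set, predicted: set) -> dict:
    """Split constraints into those that agree, data behaviour the model
    fails to predict, and model predictions unsupported by the data."""
    return {
        "consistent": inferred & predicted,
        "unexplained_data": inferred - predicted,
        "unsupported_predictions": predicted - inferred,
    }

inferred = {
    Constraint("slope_1/v_vs_1/[glucose]", "positive", "ATP saturating"),
    Constraint("intercept_change_with_[ATP]", "negative", "product absent"),
}
predicted = {
    Constraint("slope_1/v_vs_1/[glucose]", "positive", "ATP saturating"),
}
print(compare(inferred, predicted))
```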


2003 ◽  
Vol 57 (8) ◽  
pp. 996-1006 ◽  
Author(s):  
Slobodan Šašić ◽  
Yukihiro Ozaki

In this paper we report two new developments in two-dimensional (2D) correlation spectroscopy: one is the combination of the moving window concept with 2D spectroscopy to facilitate the analysis of complex data sets, and the other is the definition of the noise level in synchronous/asynchronous maps. A graphical criterion for the latter is also proposed. The combination of the moving window concept with correlation spectra allows one to split a large data matrix into smaller and simpler subsets and to analyze them instead of computing an overall correlation. A three-component system that mimics a consecutive chemical reaction is used as a model to illustrate the two ideas. Both types of correlation matrices, variable–variable and sample–sample, are analyzed, and very good agreement between the two is found. The proposed innovations enable one to grasp the complexity of the data being analyzed by 2D spectroscopy and thus to avoid the risk of over-interpretation, which arises whenever insufficient care is taken over the number of coexisting species in the system.
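For orientation, a minimal sketch of generalized 2D correlation combined with a moving window is given below, assuming Noda-type synchronous/asynchronous maps computed with the Hilbert-Noda transformation; the window size, data shapes, and function names are illustrative and not taken from the paper.

```python
# Sketch of moving-window generalized 2D correlation (Noda-type maps).
import numpy as np

def noda_matrix(m):
    """Hilbert-Noda transformation matrix for m perturbation points."""
    j, k = np.meshgrid(np.arange(m), np.arange(m), indexing="ij")
    with np.errstate(divide="ignore"):
        n = 1.0 / (np.pi * (k - j))
    n[j == k] = 0.0
    return n

def corr2d(block):
    """block: (m_samples, n_variables) spectra; returns synchronous and
    asynchronous correlation maps of the mean-centred (dynamic) spectra."""
    dyn = block - block.mean(axis=0)
    m = dyn.shape[0]
    sync = dyn.T @ dyn / (m - 1)
    asyn = dyn.T @ noda_matrix(m) @ dyn / (m - 1)
    return sync, asyn

def moving_window_corr2d(data, window=5):
    """Slide a window along the perturbation axis; compute local 2D maps
    instead of one overall correlation of the full data matrix."""
    m = data.shape[0]
    return [corr2d(data[i:i + window]) for i in range(m - window + 1)]

# toy data: 20 perturbation points, 50 spectral variables
data = np.random.default_rng(1).normal(size=(20, 50))
maps = moving_window_corr2d(data, window=5)
print(len(maps), maps[0][0].shape)   # 16 windows, each map is 50 x 50
```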


2020 ◽  
Vol 36 (4) ◽  
pp. 803-825
Author(s):  
Marco Fortini

Record linkage addresses the problem of identifying pairs of records that come from different sources but refer to the same unit of interest. Fellegi and Sunter proposed an optimal statistical test for assigning match status to candidate pairs, in which the needed parameters are obtained through the EM algorithm applied directly to the set of candidate pairs, without recourse to training data. However, this procedure has quadratic complexity as the two lists to be matched grow. In addition, a large bias in the EM-estimated parameters is also produced in this case, so the problem is usually tackled by reducing the set of candidate pairs through filtering methods such as blocking. Unfortunately, the probability that excluded pairs are actually true matches cannot be assessed with such methods. The present work proposes an efficient approach in which comparisons of records between the lists are minimised, while the EM estimates are modified by modelling tables with structural zeros in order to obtain unbiased estimates of the parameters. The improvement achieved by the suggested method is shown by means of simulations and an application based on real data.
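For orientation, a minimal sketch of the baseline Fellegi-Sunter estimation that the paper starts from is shown below: EM on binary comparison vectors under conditional independence. It deliberately omits the paper's structural-zero correction, and all names and the synthetic data are illustrative assumptions.

```python
# Minimal Fellegi-Sunter EM sketch (conditional independence, binary
# agreement patterns). This is the baseline procedure, not the paper's
# bias-corrected variant with structural zeros.
import numpy as np

def fellegi_sunter_em(gamma, p=0.1, n_iter=100):
    """gamma: (n_pairs, n_fields) 0/1 agreement patterns of candidate pairs."""
    n, k = gamma.shape
    m = np.full(k, 0.9)   # P(field agrees | pair is a match)
    u = np.full(k, 0.1)   # P(field agrees | pair is a non-match)
    for _ in range(n_iter):
        # E-step: posterior probability that each candidate pair is a match
        lm = np.prod(m**gamma * (1 - m)**(1 - gamma), axis=1)
        lu = np.prod(u**gamma * (1 - u)**(1 - gamma), axis=1)
        g = p * lm / (p * lm + (1 - p) * lu)
        # M-step: update match prevalence and agreement probabilities
        p = g.mean()
        m = (g[:, None] * gamma).sum(axis=0) / g.sum()
        u = ((1 - g)[:, None] * gamma).sum(axis=0) / (1 - g).sum()
    return p, m, u

# synthetic candidate pairs: 5% true matches, 3 comparison fields
rng = np.random.default_rng(2)
true = rng.random(5000) < 0.05
gamma = np.where(true[:, None],
                 rng.random((5000, 3)) < 0.95,   # matches mostly agree
                 rng.random((5000, 3)) < 0.2).astype(int)
print(fellegi_sunter_em(gamma))
```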


2021 ◽  
Vol 2 (4) ◽  
pp. 1-22
Author(s):  
Jing Rui Chen ◽  
P. S. Joseph Ng

Griffith AI&BD is a technology company that uses a big data platform and artificial intelligence technology to produce products for schools. The company focuses on teaching support and data analysis systems for primary and secondary schools, and on campus artificial intelligence products for the compulsory education stage in the Chinese market. Through big data, machine learning and data mining, distributed systems scattered across campuses allow anyone to sign up to join a large data processing grid and to access big data analysis and matching for learning support, helping students expand their knowledge across a variety of disciplines and supporting their learning and advancement. The aim is to improve the learning process on the basis of large data sets about students, to combine this with AI technology to develop AI electronic devices, and to provide schools with the best learning experience so they can thrive in a competitive world.


2016 ◽  
Vol 22 (2) ◽  
pp. 342-357
Author(s):  
Carlo Iapige De Gaetani ◽  
Noemi Emanuela Cazzaniga ◽  
Riccardo Barzaghi ◽  
Mirko Reguzzoni ◽  
Barbara Betti

Collocation has been widely applied in geodesy for estimating the gravity field of the Earth both locally and globally. In particular, it is the standard geodetic method used to combine all the available data into an integrated estimate of any functional of the anomalous potential T. The key point of the method is the definition of proper covariance functions of the data. Covariance function models have been proposed by many authors together with related software. In this paper a new method for finding suitable covariance models has been devised. The covariance fitting problem is reduced to a linear programming problem and solved using the simplex method. The procedure has been implemented in FORTRAN95 software and tested on simulated and real data sets. These first tests proved that the proposed method is a reliable tool for estimating proper covariance function models to be used in the collocation procedure.
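The abstract does not give the linear programming formulation, so the following is only a plausible sketch of how a covariance-fitting problem of this kind can be cast as an LP: non-negative weights of a few candidate covariance basis functions are fit to an empirical covariance by minimizing the sum of absolute residuals. The basis functions, the L1 criterion, and the use of scipy's HiGHS solver (rather than the FORTRAN95 simplex implementation mentioned above) are all assumptions.

```python
# Hedged sketch: L1 fit of non-negative covariance-model weights as an LP.
import numpy as np
from scipy.optimize import linprog

def fit_covariance_l1(dist, emp_cov, bases):
    """dist: (m,) distances; emp_cov: (m,) empirical covariance values;
    bases: candidate covariance functions of distance (callables)."""
    A = np.column_stack([b(dist) for b in bases])   # m x n design matrix
    m, n = A.shape
    # variables: [x (n weights), t (m residual bounds)]; minimize sum(t)
    # subject to |A x - emp_cov| <= t, x >= 0, t >= 0
    c = np.concatenate([np.zeros(n), np.ones(m)])
    A_ub = np.block([[A, -np.eye(m)],
                     [-A, -np.eye(m)]])
    b_ub = np.concatenate([emp_cov, -emp_cov])
    res = linprog(c, A_ub=A_ub, b_ub=b_ub,
                  bounds=[(0, None)] * (n + m), method="highs")
    return res.x[:n]

# toy empirical covariance built from two hypothetical basis models
dist = np.linspace(0.0, 100.0, 50)
bases = [lambda d: np.exp(-d / 20.0),            # exponential model
         lambda d: np.exp(-(d / 30.0) ** 2)]     # Gaussian model
emp = (0.7 * bases[0](dist) + 0.3 * bases[1](dist)
       + 0.01 * np.random.default_rng(3).normal(size=50))
print(fit_covariance_l1(dist, emp, bases).round(3))   # roughly [0.7, 0.3]
```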


Author(s):  
SUNITHA YEDDULA ◽  
K. LAKSHMAIAH

Record linkage is the process of matching records from several databases that refer to the same entities. When applied to a single database, this process is known as deduplication. Increasingly, matched data are becoming important in many application areas, because they can contain information that is not available otherwise, or that is too costly to acquire. Removing duplicate records in a single database is a crucial step in the data cleaning process, because duplicates can severely influence the outcomes of any subsequent data processing or data mining. With the increasing size of today’s databases, the complexity of the matching process has become one of the major challenges for record linkage and deduplication. In recent years, various indexing techniques have been developed for record linkage and deduplication. They aim to reduce the number of record pairs to be compared in the matching process by removing obvious non-matching pairs, while at the same time maintaining high matching quality. This paper presents a survey of variations of six indexing techniques. Their complexity is analyzed, and their performance and scalability are evaluated within an experimental framework using both synthetic and real data sets. These experiments highlight that one of the most important factors for efficient and accurate indexing for record linkage and deduplication is the proper definition of blocking keys.
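As a minimal illustration of the kind of indexing the survey evaluates, standard blocking can be sketched as follows; the blocking key used here (surname prefix plus postcode) is an illustrative choice, not one prescribed by the paper.

```python
# Minimal sketch of standard blocking: group records by a blocking key and
# generate candidate pairs only within each block, avoiding the full
# n*(n-1)/2 comparison space.
from collections import defaultdict
from itertools import combinations

def blocking_key(record):
    """Hypothetical key: first three letters of surname plus postcode."""
    return record["surname"][:3].lower() + "|" + record["postcode"]

def candidate_pairs(records):
    blocks = defaultdict(list)
    for i, rec in enumerate(records):
        blocks[blocking_key(rec)].append(i)
    pairs = set()
    for ids in blocks.values():
        pairs.update(combinations(sorted(ids), 2))   # compare only within blocks
    return pairs

records = [
    {"surname": "Smith", "postcode": "2000"},
    {"surname": "Smyth", "postcode": "2000"},   # different key: never compared
    {"surname": "Smith", "postcode": "2000"},
]
print(candidate_pairs(records))   # {(0, 2)}
```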


2020 ◽  
Vol 24 (01) ◽  
pp. 003-011 ◽  
Author(s):  
Narges Razavian ◽  
Florian Knoll ◽  
Krzysztof J. Geras

Artificial intelligence (AI) has made stunning progress in the last decade, made possible largely by advances in training deep neural networks with large data sets. Many of these solutions, initially developed for natural images, speech, or text, are now becoming successful in medical imaging. In this article we briefly summarize, in an accessible way, the current state of the field of AI. Furthermore, we highlight the most promising approaches and describe the current challenges that will need to be solved to enable broad deployment of AI in clinical practice.


2021 ◽  
Vol 2 (2) ◽  
pp. 19-33
Author(s):  
Adam Urban ◽  
David Hick ◽  
Joerg Rainer Noennig ◽  
Dietrich Kammer

Exploring the phenomenon of artificial intelligence (AI) applications in urban planning and governance, this article reviews the most current smart city developments and outlines the future potential of AI, especially in the context of participatory urban design. It concludes that the algorithmic analysis and synthesis of large data sets generated by massive user participation projects is an especially beneficial field of application, one that enables better design decision making, project validation, and evaluation.


2014 ◽  
Vol 24 (3) ◽  
pp. 224-237 ◽  
Author(s):  
Valerie Johnson ◽  
Sonia Ranade ◽  
David Thomas

Purpose – This paper aims to focus on a highly significant yet under-recognised concern: the huge growth in the volume of digital archival information and the implications of this shift for information professionals.

Design/methodology/approach – Though data loss and format obsolescence are often considered to be the major threats to digital records, the problem of scale remains under-acknowledged. This paper discusses this issue and the challenges it brings, using a case study of a set of Second World War service records.

Findings – TNA’s research has shown that it is possible to digitise large volumes of records to replace paper originals using rigorous procedures. Consequent benefits included being able to link across large data sets so that further records could be released.

Practical implications – The authors discuss whether the technical capability, plus space and cost savings, will result in increased pressure to retain, and what this means in creating a feedback loop of volume.

Social implications – The work also has implications for new definitions of the “original” archival record. There has been much debate on challenges to the definition of the archival record in the shift from paper to born-digital. The authors discuss where this leaves the digitised “original” record.

Originality/value – Large volumes of digitised and born-digital records are starting to arrive in records and archive stores, and the implications for retention are far wider than simply digital preservation. By sharing novel research into the practical implications of large-scale data retention, this paper showcases potential issues and some approaches to their management.

