large databases
Recently Published Documents


TOTAL DOCUMENTS: 1029 (FIVE YEARS: 211)
H-INDEX: 43 (FIVE YEARS: 6)

Author(s):  
Harish P. ◽  
Sreedhar S. ◽  
Kunhikoyamu . ◽  
Namboothiri M. ◽  
Devi S. ◽  
...  

Artificial intelligence (AI) can be defined as intelligence demonstrated by machines. AI research has gone through different phases, such as simulating the brain, modeling human problem solving, formal logic, large databases of knowledge, and imitating animal behavior. Since the beginning of the 21st century, highly mathematical, statistical machine learning has dominated the field and has proved useful in solving many challenging problems throughout industry and academia. The field was founded, and work has proceeded, on the assumption that human intelligence can be simulated by machines. This assumption raises questions about the mind and about the ethics of creating artificial beings with human-like intelligence, issues previously explored in myth, fiction, and philosophy. The debate also points to concerns about misuse of the technology.


Information ◽  
2022 ◽  
Vol 13 (1) ◽  
pp. 28
Author(s):  
Saïd Mahmoudi ◽  
Mohammed Amin Belarbi

Multimedia applications deal, in most cases, with extremely high volumes of multimedia data (2D and 3D images, sound, video), so efficient algorithms are needed to analyze and process these large datasets. Multimedia management, in turn, relies on efficient knowledge representation that enables effective data processing and retrieval. The main challenge is to provide fast, intelligent access to these huge datasets in a reasonable time. In this context, large-scale image retrieval is a fundamental task. Many methods have been developed in the literature to navigate large databases quickly and efficiently using content-based image retrieval (CBIR), together with techniques that reduce computing time, such as dimensionality reduction and hashing. More recently, methods based on convolutional neural networks (CNNs) for feature extraction and image classification have become widely used. In this paper, we present a comprehensive review of recent multimedia retrieval methods and algorithms applied to large datasets of 2D/3D images and videos. This editorial paper discusses the main challenges of multimedia retrieval in the context of large databases.
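As a rough illustration of the kind of pipeline this editorial surveys, the sketch below combines dimensionality reduction (PCA) with a nearest-neighbor index for large-scale retrieval. The descriptor dimension, dataset size, and use of scikit-learn are illustrative assumptions; the feature extraction step (e.g., a CNN backbone) is assumed to have happened already.

```python
# Minimal CBIR sketch: precomputed image descriptors are compressed with
# PCA (dimensionality reduction), then queried via a k-NN index.
# All sizes are illustrative; `features` stands in for CNN output.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(0)
features = rng.normal(size=(10_000, 512))   # 10k images, 512-d descriptors

pca = PCA(n_components=64)                  # compress 512-d -> 64-d
reduced = pca.fit_transform(features)

index = NearestNeighbors(n_neighbors=5, metric="euclidean").fit(reduced)

query = pca.transform(features[:1])         # reuse one descriptor as a query
distances, neighbor_ids = index.kneighbors(query)
print(neighbor_ids)                         # indices of the 5 closest images
```

A hashing method (e.g., locality-sensitive hashing) would typically replace the exact k-NN index when even the reduced search is too slow.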


2022 ◽  
Vol 14 (1) ◽  
pp. 0-0

Utility mining with negative item values has recently attracted interest in the data mining field because of its practical relevance. Previously, the utility values of itemsets were assumed to be positive; in real-world applications, however, an itemset may involve negative item values. This paper presents a method for redesigning the ordering policy by including high-utility itemsets with negative items. First, a utility mining algorithm is used to find high-utility itemsets. Then, an ordering policy is estimated for high-utility items, taking both defective and non-defective items into account. A numerical example is presented to validate the results.
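To make the setting concrete, here is a minimal sketch of itemset utility when some items carry negative unit values, which is the situation the paper addresses. The item names, quantities, and minimum-utility threshold are invented for illustration; the paper's actual algorithm is not reproduced here.

```python
# Sketch: utility of itemsets when some items have negative unit values.
# utility(itemset, T) = sum over items of quantity_in_T * unit_utility.
# All names and numbers below are illustrative, not from the paper.
from itertools import combinations

unit_utility = {"A": 5, "B": -2, "C": 3}          # "B" has a negative value
transactions = [                                   # item -> purchased quantity
    {"A": 2, "B": 1},
    {"A": 1, "B": 3, "C": 2},
    {"C": 4},
]

def itemset_utility(itemset, db):
    total = 0
    for t in db:
        if all(i in t for i in itemset):           # itemset contained in t
            total += sum(t[i] * unit_utility[i] for i in itemset)
    return total

min_util = 8                                       # illustrative threshold
items = sorted(unit_utility)
for r in range(1, len(items) + 1):
    for itemset in combinations(items, r):
        u = itemset_utility(itemset, transactions)
        if u >= min_util:
            print(itemset, u)                      # high-utility itemsets
```

Note how the negative value of "B" can push an otherwise frequent combination like {A, B} below the threshold, which is exactly why positive-only utility mining misjudges such itemsets.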


2022 ◽  
pp. 1192-1215
Author(s):  
Mirjana Pejic-Bach ◽  
Jasmina Pivar ◽  
Živko Krstić

The technical field of big data for prediction attracts the attention of many stakeholders. The reasons relate to the potential of big data, which allows learning from past behavior, discovering patterns and value, and optimizing business processes based on new insights from large databases. However, to fully exploit this potential, stakeholders need to understand the scope and volume of patenting related to the use of big data for prediction. This chapter therefore analyzes patenting activity related to big data usage for prediction by (1) exploring the timeline and geographic distribution of patenting activities, (2) identifying the most active assignees of the technical content of interest, (3) detecting the type of protected technical content according to the International Patent Classification (IPC) system, and (4) performing text-mining analysis to discover the topics that emerge most often in patent abstracts.
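As a hedged illustration of step (4), the sketch below mines topics from a handful of placeholder abstracts with LDA; the chapter's real corpus, preprocessing, and topic count are unknown from this summary.

```python
# Sketch of topic discovery over patent abstracts with LDA.
# The abstracts and topic count are placeholders, not the chapter's data.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

abstracts = [
    "predicting demand from large transaction databases",
    "machine learning model for predictive maintenance of equipment",
    "real-time stream processing of sensor data for forecasting",
]

vec = CountVectorizer(stop_words="english")
counts = vec.fit_transform(abstracts)

lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(counts)
terms = vec.get_feature_names_out()
for k, weights in enumerate(lda.components_):
    top = [terms[i] for i in weights.argsort()[-5:][::-1]]
    print(f"topic {k}: {top}")                 # most probable terms per topic
```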


Author(s):  
Juan E Arco ◽  
Andrés Ortiz ◽  
Javier Ramírez ◽  
Yu-Dong Zhang ◽  
Juan M Górriz

The automation of medical image diagnosis is currently a challenging task. Computer-aided diagnosis (CAD) systems can be a powerful tool for clinicians, especially when hospitals are overwhelmed. These tools are usually based on artificial intelligence (AI), a field recently revolutionized by deep learning. Such approaches usually achieve high performance through complex solutions, at the price of a high computational cost and the need for large databases. In this work, we propose a classification framework based on sparse coding. Images are first partitioned into tiles, and a dictionary is built by applying PCA to these tiles. The original signals are then expressed as linear combinations of the dictionary elements and reconstructed by iteratively deactivating the elements associated with each component. Classification is finally performed using the resulting reconstruction errors as features. Performance is evaluated in a real scenario in which four pathologies must be distinguished: controls versus bacterial pneumonia versus viral pneumonia versus COVID-19. Our system differentiates between pneumonia patients and controls with an accuracy of 97.74%, whereas in the four-class setting the accuracy is 86.73%. The excellent results and the pioneering use of sparse coding in this scenario show that our proposal can assist clinicians when their workload is high.
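A minimal sketch of the reconstruction-error features described above: the image is tiled, a PCA dictionary is fitted to the tiles, and the reconstruction error as components are progressively deactivated yields a feature vector. Tile size, component count, and the random stand-in image are assumptions, not the paper's settings.

```python
# Sketch: tile the image, build a PCA dictionary from the tiles, then use
# reconstruction errors at varying numbers of active components as features.
import numpy as np
from sklearn.decomposition import PCA

def tile(img, size=8):
    h, w = img.shape
    return np.array([img[i:i+size, j:j+size].ravel()
                     for i in range(0, h - size + 1, size)
                     for j in range(0, w - size + 1, size)])

rng = np.random.default_rng(0)
image = rng.normal(size=(64, 64))          # stand-in for a chest X-ray
tiles = tile(image)                        # 64 tiles, 64 values each

pca = PCA(n_components=16).fit(tiles)      # dictionary learned from tiles
codes = pca.transform(tiles)

features = []
for k in range(1, 17):
    partial = codes.copy()
    partial[:, k:] = 0                     # deactivate all but the first k atoms
    recon = pca.inverse_transform(partial)
    features.append(np.mean((tiles - recon) ** 2))   # reconstruction error

print(features)  # error curve fed to a downstream classifier
```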


2021 ◽  
Author(s):  
Vu Van Vinh ◽  
Lam Thi Hoa Mi ◽  
Duong Thi Mong Thuy

2021 ◽  
Vol 16 (4) ◽  
pp. 30-35
Author(s):  
Prachi Gurav ◽  
Sanjeev Panandikar

As the world progresses towards automation, manual search for data in large databases also needs to keep pace. When the database contains health data, even minute details require careful scrutiny. Keyword search techniques help extract data from large databases and come in two variants: exact and approximate. A user searching an EHR expects a short search time. To this end, this work investigates Metaphone (exact search) and Similar_Text (approximate search). We applied keyword search to data that includes symptoms and names of medicines. Our results indicate that the search time for Similar_Text is better than for Metaphone.
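For illustration, the sketch below contrasts the two kinds of technique: an exact lookup on precomputed phonetic keys (Metaphone, here via the third-party jellyfish package, an assumption about tooling, not the paper's implementation) and an approximate scan in the spirit of PHP's similar_text (here difflib's similarity ratio). The symptom list and query are invented.

```python
# Exact phonetic-key lookup vs. approximate similarity scan over keywords.
import difflib
import time
import jellyfish  # assumed third-party source of a Metaphone implementation

symptoms = ["headache", "fever", "fatigue", "nausea", "cough"]
query = "feever"                                   # misspelled keyword

# Exact search: precompute phonetic keys, then one dictionary lookup.
phonetic_index = {jellyfish.metaphone(s): s for s in symptoms}
t0 = time.perf_counter()
exact_hit = phonetic_index.get(jellyfish.metaphone(query))
t1 = time.perf_counter()

# Approximate search: score the query against every record.
t2 = time.perf_counter()
scores = [(difflib.SequenceMatcher(None, query, s).ratio(), s)
          for s in symptoms]
approx_hit = max(scores)
t3 = time.perf_counter()

print(exact_hit, t1 - t0)       # phonetic match and its lookup time
print(approx_hit, t3 - t2)      # best fuzzy match and its scan time
```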


2021 ◽  
Vol 13 (24) ◽  
pp. 13698
Author(s):  
Grigore Vasile Herman ◽  
Vasile Grama ◽  
Sorin Buhaș ◽  
Lavinia Daiana Garai ◽  
Tudor Caciora ◽  
...  

Winter sports are the main attraction of many tourist areas in Romania, contributing significantly to the development of local economies. On this basis, the study aims to analyze Romania's ski areas and the extent to which they contribute to the sustainable development of the local economy. This is particularly important because, in recent decades, climate change has significantly affected winter sports, especially skiing. An analysis of the physical characteristics of ski runs in Romania (number, length, width, level difference, and slope) is therefore accompanied by an analysis of the dynamics of tourism's share in the local economy of winter sport locations, based on tourism turnover relative to total turnover. Both graphic and cartographic methods were used, drawing on the quantitative and qualitative data available for Romanian ski slopes and their host localities; ArcGIS 10.6 was used to prepare the graphical representations, and other software was used to process the large databases involved. The results show great diversity across counties, localities, and ski slopes with respect to these physical characteristics. The evolution of tourism turnover as a share of total turnover revealed several categories of localities according to their economic dependence on winter sports, with a highly differentiated impact between localities.


2021 ◽  
Author(s):  
Tom Altenburg ◽  
Thilo Muth ◽  
Bernhard Y. Renard

Abstract: Mass spectrometry-based proteomics makes it possible to study all proteins of a sample at the molecular level. The ever-increasing complexity and volume of proteomics MS data require powerful yet efficient computational and statistical analysis. In particular, most recent bottom-up MS-based proteomics studies consider a diverse pool of post-translational modifications, employ large databases (as in metaproteomics or proteogenomics), contain multiple protein isoforms, include unspecific cleavage sites, or even combinations thereof, and thus face a computationally challenging protein identification problem. To cope with the resulting large search spaces, we present a deep learning approach that jointly embeds MS/MS spectra and peptides into the same vector space, so that embeddings can be compared easily and interchangeably using Euclidean distances. In contrast to existing spectrum embedding techniques, ours are learned jointly with their respective peptides and thus remain meaningful. By visualizing the learned manifold of both spectrum and peptide embeddings against their physicochemical properties, our approach becomes easily interpretable. At the same time, the joint embeddings blur the line between spectra and protein sequences, providing a powerful framework for peptide identification. In particular, we build an open search that can match tens of thousands of spectra against millions of peptides within seconds. yHydra achieves identification rates comparable to MSFragger. Because of the open search, a delta mass is assigned to each identification, which allows post-translational modifications to be characterized without restriction. Meaningful joint embeddings enable faster open searches and generally make downstream analysis efficient and convenient, for example for integration with other omics types. Availability: upon request. Contact: [email protected]
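A minimal sketch of the lookup such joint embeddings enable: once spectra and peptides share one vector space, identification reduces to a Euclidean nearest-neighbor query. The embedding dimension and random vectors below are placeholders; a trained encoder such as yHydra's would supply the real embeddings.

```python
# Sketch: open search as Euclidean nearest-neighbor lookup in a shared
# embedding space. Vectors are random placeholders for trained embeddings.
import numpy as np
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(0)
peptide_embeddings = rng.normal(size=(100_000, 64))   # stand-in for millions
spectrum_embeddings = rng.normal(size=(5, 64))        # a few query spectra

index = NearestNeighbors(n_neighbors=1, metric="euclidean")
index.fit(peptide_embeddings)

dist, pep_ids = index.kneighbors(spectrum_embeddings)
for s, (d, p) in enumerate(zip(dist.ravel(), pep_ids.ravel())):
    print(f"spectrum {s} -> peptide {p} (distance {d:.3f})")
```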


2021 ◽  
pp. 1-12
Author(s):  
Hugo Geerts ◽  
Piet van der Graaf

With the approval of aducanumab via the Accelerated Approval Pathway and the recognition of amyloid load as a surrogate marker, new successful therapeutic approaches will be driven by combination therapy, as was the case in oncology after the launch of immune checkpoint inhibitors. However, the sheer number of possible therapeutic combinations substantially complicates the search for optimal ones. Data-driven approaches based on large databases or electronic health records can identify optimal combinations, often using artificial intelligence or machine learning to work through the many candidates, but they are limited to the pharmacology of existing marketed drugs and depend heavily on the quality of the training sets. Knowledge-driven in silico modeling approaches use multi-scale, biophysically realistic models of neuroanatomy, physiology, and pathology, and can be personalized with an individual patient's comedications, disease state, and genotype to create 'virtual twin patients'. Such models simulate effects on the action potential dynamics of anatomically informed neuronal circuits that drive functional clinical readouts. Informed by data-driven approaches, this knowledge-driven modeling could systematically and quantitatively simulate all possible target combinations for a maximal synergistic effect on a clinically relevant functional outcome. The approach seamlessly integrates pharmacokinetic modeling of different therapeutic modalities. A crucial requirement for constraining the parameters is access to (preferably anonymized) individual patient data from completed clinical trials of various selective compounds. We believe that the combination of data-driven and knowledge-driven modeling could be a game changer in finding a cure for this devastating disease, which affects the most complex organ in the universe.

