Artificial Intelligence

2019 ◽  
pp. 357-385
Author(s):  
Eric Guérin ◽  
Orhun Aydin ◽  
Ali Mahdavi-Amiri

Abstract In this chapter, we provide an overview of different artificial intelligence (AI) and machine learning (ML) techniques and discuss how these techniques have been employed in managing geospatial data sets as they pertain to Digital Earth. We introduce statistical ML methods that are frequently used in spatial problems and their applications. We discuss generative models, one of the hottest topics in ML, to illustrate the possibility of generating new data sets that can be used to train data analysis methods or to create new possibilities for Digital Earth such as virtual reality or augmented reality. We finish the chapter with a discussion of deep learning methods that have high predictive power and have shown great promise in data analysis of geospatial data sets provided by Digital Earth.

2021 ◽  
Vol 21 (1) ◽  
Author(s):  
Muhammad Javed Iqbal ◽  
Zeeshan Javed ◽  
Haleema Sadia ◽  
Ijaz A. Qureshi ◽  
Asma Irshad ◽  
...  

Abstract Artificial intelligence (AI) is the use of mathematical algorithms to mimic human cognitive abilities and to address difficult healthcare challenges, including complex biological abnormalities such as cancer. The exponential growth of AI over the last decade shows its potential as a platform for optimal decision-making in situations where the human mind cannot process huge volumes of data in a narrow time frame. Cancer is a complex and multifaceted disorder with thousands of genetic and epigenetic variations. AI-based algorithms hold great promise for identifying these genetic mutations and aberrant protein interactions at a very early stage. Modern biomedical research is also focused on bringing AI technology to the clinic safely and ethically. AI-based assistance to pathologists and physicians could be a great leap forward in the prediction of disease risk, diagnosis, prognosis, and treatment. Clinical applications of AI and Machine Learning (ML) in cancer diagnosis and treatment are the future of medical guidance toward faster mapping of a new treatment for every individual. By using an AI-based system approach, researchers can collaborate in real time and share knowledge digitally, potentially helping to heal millions. In this review, we connect biology with Artificial Intelligence to present this game-changing technology for the clinic and explain how AI-based assistance can help oncologists deliver precise treatment.


Electronics ◽  
2022 ◽  
Vol 11 (2) ◽  
pp. 245
Author(s):  
Konstantinos G. Liakos ◽  
Georgios K. Georgakilas ◽  
Fotis C. Plessas ◽  
Paris Kitsos

A significant problem in the field of hardware security is the hardware trojan (HT). HTs can be inserted into a circuit at any phase of the production chain; they degrade the infected circuit, destroy it, or leak encrypted data. Efforts are now being made to address HTs through machine learning (ML) techniques, mainly at the gate-level netlist (GLN) phase, but there are restrictions. Specifically, the normal and infected circuits available through free public libraries such as Trust-HUB are drawn from a small number of benchmark samples created from large circuits, so it is difficult to develop robust ML-based models against HTs from these data. In this paper, we propose a new deep learning (DL) tool named Generative Artificial Intelligence Netlists SynthesIS (GAINESIS). GAINESIS is based on the Wasserstein Conditional Generative Adversarial Network (WCGAN) algorithm and area–power analysis features from the GLN phase, and it synthesizes new normal and infected circuit samples for this phase. Based on our GAINESIS tool, we synthesized new data sets of different sizes and developed and compared seven ML classifiers. The results demonstrate that our newly generated data sets significantly enhance the performance of ML classifiers compared with the initial Trust-HUB data set.
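The abstract gives no implementation details; as an illustration only, the Wasserstein objective underlying WCGAN-style generators such as GAINESIS can be sketched in a few lines of numpy. The linear critic, the feature dimensionality, and the sample values below are all invented for the example, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def critic(x, w):
    """Toy linear critic: scores how 'real' a feature vector looks."""
    return x @ w

# Hypothetical area-power feature vectors for real and generated circuit samples.
real = rng.normal(loc=1.0, scale=0.5, size=(64, 8))
fake = rng.normal(loc=0.0, scale=0.5, size=(64, 8))
w = rng.normal(size=8)

# Wasserstein critic objective: maximize E[critic(real)] - E[critic(fake)],
# i.e. minimize the negative of that difference.
critic_loss = -(critic(real, w).mean() - critic(fake, w).mean())

# Generator objective: make generated samples score as high as possible.
generator_loss = -critic(fake, w).mean()

print(critic_loss, generator_loss)
```

In a full WCGAN, both networks would be deep models trained alternately, with a weight constraint or gradient penalty on the critic and a class condition appended to the inputs.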


2020 ◽  
Author(s):  
Amol Thakkar ◽  
Veronika Chadimova ◽  
Esben Jannik Bjerrum ◽  
Ola Engkvist ◽  
Jean-Louis Reymond

Computer-aided synthesis planning (CASP) is part of a suite of artificial intelligence (AI) based tools that can propose synthetic routes for a wide range of compounds. However, at present these tools are too slow to screen the synthetic feasibility of millions of generated or enumerated compounds before identification of potential bioactivity by virtual screening (VS) workflows. Herein we report a machine learning (ML) based method capable of classifying whether or not a synthetic route can be identified for a particular compound by the CASP tool AiZynthFinder. The resulting ML models return a retrosynthetic accessibility score (RAscore) for any molecule of interest and compute it 4,500 times faster than retrosynthetic analysis performed by the underlying CASP tool. The RAscore should be useful for pre-screening millions of virtual molecules from enumerated databases or generative models for synthetic accessibility, producing higher-quality databases for virtual screening of biological activity.
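The paper's classifier learns to predict, from a molecular representation, whether AiZynthFinder would find a route. A minimal sketch of that idea, using plain logistic regression over bit-vector "fingerprints" in numpy; the fingerprints, labels, and training rule here are synthetic stand-ins, not the descriptors or models used for the actual RAscore.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical 128-bit molecular fingerprints; in the paper, real fingerprints
# and solved/unsolved labels from AiZynthFinder route searches are used instead.
fingerprints = rng.integers(0, 2, size=(200, 128)).astype(float)
solved = (fingerprints[:, :4].sum(axis=1) > 2).astype(float)  # invented labels

def ra_score(x, w, b):
    """Sigmoid score in [0, 1]: higher = more likely synthetically accessible."""
    return 1.0 / (1.0 + np.exp(-(x @ w + b)))

# A few steps of plain gradient descent on the logistic loss.
w, b, lr = np.zeros(128), 0.0, 0.1
for _ in range(300):
    p = ra_score(fingerprints, w, b)
    grad = p - solved
    w -= lr * fingerprints.T @ grad / len(solved)
    b -= lr * grad.mean()

accuracy = ((ra_score(fingerprints, w, b) > 0.5) == solved.astype(bool)).mean()
print(f"training accuracy: {accuracy:.2f}")
```

The speedup reported in the abstract comes from exactly this substitution: one forward pass of a trained model replaces a full retrosynthetic tree search per molecule.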


Author(s):  
Fernando Enrique Lopez Martinez ◽  
Edward Rolando Núñez-Valdez

IoT, big data, and artificial intelligence are currently three of the most relevant and trending technologies for innovation and predictive analysis in healthcare. Many healthcare organizations are already developing their own home-centric data collection networks and intelligent big data analytics systems based on machine-learning principles. The benefit of using IoT, big data, and artificial intelligence for community and population health is better health outcomes for populations and communities. The new generation of machine-learning algorithms can use the large, standardized data sets generated in healthcare to improve the effectiveness of public health interventions. Much of these data comes from sensors, devices, electronic health records (EHRs), data generated by public health nurses, mobile data, social media, and the internet. This chapter shows a high-level implementation of a complete IoT, big data, and machine learning solution, deployed in the city of Cartagena, Colombia, for hypertensive patients using an eHealth sensor and Amazon Web Services components.
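To make the pipeline concrete, one ingredient of such a system is a rule that turns streaming sensor readings into alerts. The sketch below is purely illustrative: the rolling-window rule, the 140 mmHg threshold, and the window size are invented for the example, and the chapter's actual Cartagena deployment logic is not described in the abstract.

```python
from collections import deque
from statistics import mean

# Hypothetical rule: flag a patient when the rolling mean of the last 5
# systolic readings exceeds 140 mmHg.
WINDOW = 5
THRESHOLD = 140

def make_monitor():
    readings = deque(maxlen=WINDOW)
    def ingest(systolic):
        readings.append(systolic)
        if len(readings) == WINDOW and mean(readings) > THRESHOLD:
            return "alert"
        return "ok"
    return ingest

monitor = make_monitor()
stream = [128, 135, 142, 150, 155, 160, 158]   # invented readings, mmHg
statuses = [monitor(s) for s in stream]
print(statuses)  # the last three windows average above 140 and trigger alerts
```

In a cloud deployment, the ingest step would typically sit behind a managed message queue, with the alert rule replaced or augmented by a trained model.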


2020 ◽  
Vol 10 (6) ◽  
pp. 1343-1358
Author(s):  
Ernesto Iadanza ◽  
Rachele Fabbri ◽  
Džana Bašić-ČiČak ◽  
Amedeo Amedei ◽  
Jasminka Hasic Telalovic

Abstract This article aims to provide a thorough overview of the use of Artificial Intelligence (AI) techniques in studying the gut microbiota and its role in the diagnosis and treatment of some important diseases. The association between the microbiota and diseases, together with its clinical relevance, is still difficult to interpret. Advances in AI techniques such as Machine Learning (ML) and Deep Learning (DL) can help clinicians process and interpret these massive data sets. Two research groups were involved in this scoping review, working in two different areas of Europe: Florence and Sarajevo. The papers included in the review describe the use of ML or DL methods applied to the study of the human gut microbiota. In total, 1109 papers were considered; after elimination, a final set of 16 articles remained for the scoping review. Different AI techniques were applied in the reviewed papers: eleven papers evaluated only ML algorithms (ranging from one to eight algorithms applied to one dataset), while the remaining five examined both ML and DL algorithms. The most frequently applied ML algorithm was Random Forest, which also exhibited the best performance.
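Since Random Forest was the most used and best-performing algorithm in the reviewed papers, a minimal dependency-free sketch of the idea may help: many weak trees, each fit on a bootstrap resample, vote on the label. Everything here is invented for illustration (stump-depth trees, five fake taxa abundances, a made-up disease rule); real studies would use a library implementation on actual abundance tables.

```python
import random

random.seed(42)

# Hypothetical data: each sample is relative abundances of 5 microbial taxa,
# label 1 = disease, 0 = healthy, following an invented rule.
def make_sample():
    x = [random.random() for _ in range(5)]
    y = 1 if x[0] + x[3] > 1.0 else 0
    return x, y

train = [make_sample() for _ in range(300)]

def fit_stump(samples):
    """Pick the (feature, threshold) pair with the best training accuracy."""
    best = None
    for f in range(5):
        for t in [i / 10 for i in range(1, 10)]:
            acc = sum((x[f] > t) == (y == 1) for x, y in samples) / len(samples)
            if best is None or acc > best[0]:
                best = (acc, f, t)
    _, f, t = best
    return f, t

# A tiny forest: each depth-1 tree sees a bootstrap resample of the training set.
forest = [fit_stump([random.choice(train) for _ in train]) for _ in range(25)]

def predict(x):
    votes = sum(x[f] > t for f, t in forest)
    return 1 if votes * 2 > len(forest) else 0

accuracy = sum(predict(x) == y for x, y in train) / len(train)
print(f"forest training accuracy: {accuracy:.2f}")
```

A production Random Forest additionally subsamples features at each split and grows deeper trees, which is what lets it capture the non-axis-aligned interactions common in abundance data.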


2019 ◽  
Vol 76 (6) ◽  
pp. 1681-1690 ◽  
Author(s):  
Alexander Winkler-Schwartz ◽  
Vincent Bissonnette ◽  
Nykan Mirchi ◽  
Nirros Ponnudurai ◽  
Recai Yilmaz ◽  
...  

2014 ◽  
Vol 19 (5) ◽  
pp. 640-650 ◽  
Author(s):  
Shantanu Singh ◽  
Anne E. Carpenter ◽  
Auguste Genovesio

Target-based high-throughput screening (HTS) has recently been critiqued for its relatively poor yield compared to phenotypic screening approaches. One type of phenotypic screening, image-based high-content screening (HCS), has been seen as particularly promising. In this article, we assess whether HCS is as high content as it can be. We analyze HCS publications and find that although the number of HCS experiments published each year continues to grow steadily, the information content lags behind. We find that a majority of high-content screens published so far (60-80%) made use of only one or two image-based features measured from each sample and disregarded the distribution of those features among each cell population. We discuss several potential explanations, focusing on the hypothesis that data analysis traditions are to blame. This includes practical problems related to managing large and multidimensional HCS data sets as well as the adoption of assay quality statistics from HTS to HCS. Both may have led to the simplification or systematic rejection of assays carrying complex and valuable phenotypic information. We predict that advanced data analysis methods that enable full multiparametric data to be harvested for entire cell populations will enable HCS to finally reach its potential.


2011 ◽  
Vol 29 (3) ◽  
pp. 467-491 ◽  
Author(s):  
H. Vanhamäki ◽  
O. Amm

Abstract. We present a review of selected data-analysis methods that are frequently applied in studies of ionospheric electrodynamics and magnetosphere-ionosphere coupling using ground-based and space-based data sets. Our focus is on methods that are data driven (not simulations or statistical models) and can be used in mesoscale studies, where the analysis area is typically some hundreds or thousands of km across. The selection of reviewed methods is such that most combinations of measured input data (electric field, conductances, magnetic field and currents) that occur in practical applications are covered. The techniques are used to solve the unmeasured parameters from Ohm's law and Maxwell's equations, possibly with help of some simplifying assumptions. In addition to reviewing existing data-analysis methods, we also briefly discuss possible extensions that may be used for upcoming data sets.
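One of the relations these data-analysis methods manipulate is the height-integrated ionospheric Ohm's law, J = Σ_P E + Σ_H (b̂ × E), which links the sheet current density J to the electric field E through the Pedersen and Hall conductances. A minimal numpy evaluation of the forward direction is below; the conductance and field values are hypothetical round numbers, not from any data set in the review.

```python
import numpy as np

# Height-integrated Ohm's law: J = Sigma_P * E + Sigma_H * (b x E),
# with b the unit vector along the magnetic field (downward in the north).
sigma_p = 5.0   # Pedersen conductance, siemens (hypothetical value)
sigma_h = 10.0  # Hall conductance, siemens (hypothetical value)

E = np.array([20e-3, 0.0, 0.0])   # electric field, V/m (20 mV/m, x = east)
b = np.array([0.0, 0.0, -1.0])    # magnetic-field unit vector, pointing down

J = sigma_p * E + sigma_h * np.cross(b, E)  # sheet current density, A/m
print(J)
```

The inverse problems treated in the review run this relation the other way: given measured currents or magnetic perturbations and some subset of {E, Σ_P, Σ_H}, solve for the unmeasured quantities, usually with simplifying assumptions about their spatial structure.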


2021 ◽  
Vol 2021 ◽  
pp. 1-7
Author(s):  
Li Ma

With the development of language research and language teaching, people have realized that grammatical competence is an important part of communicative competence. In foreign language teaching, grammar teaching is not only necessary but also a main route to the goal of communicative competence. This article studies an immersive, virtual-reality-based context teaching method for college English built on artificial intelligence and machine learning, with the aim of improving students’ English ability. In a comparative teaching experiment with two classes of university freshmen, the experimental class received VR-based immersive virtual context teaching from a constructivist perspective, while the control class used common multimedia equipment and traditional teaching methods. In the traditional classroom, teachers occupy most of the time and students only passively receive information from them; students have little chance to exchange information and express ideas in the target language, and most of the time they remain “immersed” in a Chinese-language environment. The overall English level of the experimental class was better than that of the control class, with an average score 2.8 points higher. This shows that college English immersive context teaching combining constructivist theory and VR technology can indeed improve students’ English level.

