A Novel Technique to Regenerate Sculpture Using Generative Adversarial Network

Author(s):  
S.A.K. Jainulabudeen ◽  
H. Shalma ◽  
S. Gowri Shankar ◽  
D. Anuradha ◽  
K. Soniya

Dance, music, and other forms of art have been prominent for centuries. Many dynasties ruled the nation over that time, and every king encouraged the arts in one way or another. Only a small fraction of the finest art of that era survives today; much of it has been lost to the shadows. To redeem this lost art, we turn to modern technologies such as machine learning and artificial intelligence, through which present-day art lovers can access knowledge of what was lost. Retrieval is possible only if we can learn the languages needed to read the old sculptures and the paintings on the walls of ancient architecture. Using present-day technology, we aim to recoup that lost art by reading the walls of the structures where it has been hidden for centuries, so that it does not continue to fade into shadow and eventually vanish. In this paper, we present a DC-GAN model designed to inherit the artistic skills of our ancestors by training on key images of the sculptures they created.
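The abstract names a DC-GAN but gives no implementation detail. As a minimal, hypothetical sketch of the adversarial objective such a model trains with (toy discriminator scores stand in for real networks, and the function names are ours, not the paper's):

```python
import math

def discriminator_loss(real_scores, fake_scores):
    """Binary cross-entropy: push scores on real sculpture images toward 1,
    scores on generated images toward 0."""
    real_term = -sum(math.log(s) for s in real_scores) / len(real_scores)
    fake_term = -sum(math.log(1 - s) for s in fake_scores) / len(fake_scores)
    return real_term + fake_term

def generator_loss(fake_scores):
    """Non-saturating generator loss: push the discriminator's score
    on generated images toward 1."""
    return -sum(math.log(s) for s in fake_scores) / len(fake_scores)

# Toy discriminator outputs (probabilities that an image is a real sculpture photo).
real = [0.9, 0.8]   # discriminator is fairly sure these are real
fake = [0.2, 0.1]   # ...and fairly sure these are generated

d_loss = discriminator_loss(real, fake)
g_loss = generator_loss(fake)
```

In an actual DC-GAN these scores come from a convolutional discriminator, and both networks are updated alternately by gradient descent on these two losses.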

Electronics ◽  
2022 ◽  
Vol 11 (2) ◽  
pp. 245
Author(s):  
Konstantinos G. Liakos ◽  
Georgios K. Georgakilas ◽  
Fotis C. Plessas ◽  
Paris Kitsos

A significant problem in the field of hardware security is the hardware trojan (HT). HTs can be inserted into a circuit at any phase of the production chain, and an inserted HT can degrade the infected circuit, destroy it, or leak encrypted data. Nowadays, efforts are being made to address HTs through machine learning (ML) techniques, mainly at the gate-level netlist (GLN) phase, but there are some restrictions. Specifically, the number and variety of normal and infected circuits available through free public libraries such as Trust-HUB are based on a few benchmark samples created from large circuits. It is therefore difficult, based on these data, to develop robust ML-based models against HTs. In this paper, we propose a new deep learning (DL) tool named Generative Artificial Intelligence Netlists SynthesIS (GAINESIS). GAINESIS is based on the Wasserstein Conditional Generative Adversarial Network (WCGAN) algorithm and on area–power analysis features from the GLN phase, and it synthesizes new normal and infected circuit samples for this phase. Based on our GAINESIS tool, we synthesized new data sets of different sizes, then developed and compared seven ML classifiers. The results demonstrate that our newly generated data sets significantly enhance the performance of ML classifiers compared with the initial Trust-HUB data set.
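GAINESIS itself is not reproduced here, but the Wasserstein objective that a WCGAN-style critic and generator optimize can be sketched in plain Python (the scores below are toy numbers standing in for critic outputs over area–power feature vectors; the function names are ours):

```python
def critic_loss(real_scores, fake_scores):
    """Wasserstein critic loss: the critic maximizes E[f(real)] - E[f(fake)],
    so the loss it minimizes is mean(fake) - mean(real)."""
    mean = lambda xs: sum(xs) / len(xs)
    return mean(fake_scores) - mean(real_scores)

def wgen_loss(fake_scores):
    """The generator tries to raise the critic's score on synthesized samples."""
    return -sum(fake_scores) / len(fake_scores)

# Toy critic scores for real netlist samples vs. freshly synthesized ones.
loss_c = critic_loss([1.0, 2.0], [0.0, 1.0])   # -1.0: critic separates them well
loss_g = wgen_loss([0.0, 1.0])                  # -0.5
```

In the conditional variant, the class label (normal vs. HT-infected) is also fed to both networks, which is what lets a tool like GAINESIS synthesize either class on demand.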


Genes ◽  
2021 ◽  
Vol 12 (10) ◽  
pp. 1511
Author(s):  
Giovanna Cilluffo ◽  
Salvatore Fasola ◽  
Giuliana Ferrante ◽  
Velia Malizia ◽  
Laura Montalbano ◽  
...  

This narrative review aims to provide an overview of the main Machine Learning (ML) techniques and their applications in pharmacogenetics (such as antidepressant, anti-cancer, and warfarin drugs) over the past 10 years. ML deals with the study, design, and development of algorithms that give computers the capability to learn without being explicitly programmed. ML is a sub-field of artificial intelligence, and to date it has demonstrated satisfactory performance on a wide range of tasks in biomedicine. Depending on the final goal, ML can be classified as Supervised (SML) or Unsupervised (UML). SML techniques are applied when prediction is the focus of the research. UML techniques, on the other hand, are used when the outcome is not known and the goal of the research is to unveil the underlying structure of the data. The increasing use of sophisticated ML algorithms will likely be instrumental in improving knowledge in pharmacogenetics.
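The SML/UML distinction can be made concrete with a toy, plain-Python example (1-D numbers standing in for a single measured feature; this illustration is ours, not the review's):

```python
def nearest_centroid_fit(xs, ys):
    """Supervised: labels are given, so we learn one centroid per known class."""
    cents = {}
    for label in set(ys):
        pts = [x for x, y in zip(xs, ys) if y == label]
        cents[label] = sum(pts) / len(pts)
    return cents

def nearest_centroid_predict(cents, x):
    """Predict the label whose centroid is closest: prediction is the goal."""
    return min(cents, key=lambda lab: abs(x - cents[lab]))

def two_means(xs, iters=20):
    """Unsupervised: no labels; discover two groups hidden in the data."""
    c0, c1 = min(xs), max(xs)
    for _ in range(iters):
        g0 = [x for x in xs if abs(x - c0) <= abs(x - c1)]
        g1 = [x for x in xs if abs(x - c0) > abs(x - c1)]
        c0, c1 = sum(g0) / len(g0), sum(g1) / len(g1)
    return sorted((c0, c1))
```

The first pair of functions is the SML pattern (learn from labeled examples, then predict); `two_means` is the UML pattern (reveal structure with no outcome variable at all).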


Author(s):  
Shweta Negi ◽  
Mydhili Jayachandran ◽  
Shikha Upadhyay

The Deepfake algorithm allows its user to create fake images, audio, and videos that give a very real impression but are fake. This degree of technology is achieved through advances in deep learning, machine learning, artificial intelligence, and neural networks, combining algorithms such as generative adversarial networks (GANs) and autoencoders. Any technology has positive and negative repercussions. Deepfakes can help people who have lost their speech by giving them a new, improved voice; commercially, they can be used to improve animation or movie quality and put creative imagination to work; they can even be therapeutic for people who have lost their dear ones. Negative aspects include fake images, videos, and audio that look very real and can threaten an individual's privacy, organizations, democracy, and even national security. This review paper presents the history of how deepfakes emerged, explains how they work, including the various algorithms involved, surveys major research on understanding deepfakes in the literature, and, most importantly, discusses recent advances in deepfake detection methods and robust preventive measures.
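Face-swap pipelines of the kind reviewed here commonly use a shared encoder with one decoder per identity; swapping means decoding person A's compressed expression with person B's decoder. A deliberately crude, stdlib-only sketch of that structure follows (real systems learn these maps with deep networks; the averaging/offset arithmetic here is purely illustrative):

```python
def encode(face):
    """Toy 'encoder': compress a face vector by averaging adjacent pairs,
    standing in for a learned low-dimensional expression code."""
    return [(face[i] + face[i + 1]) / 2 for i in range(0, len(face), 2)]

def make_decoder(style_offset):
    """Toy 'decoder': expand the code back, adding an identity-specific offset
    that stands in for a learned per-person appearance."""
    def decode(code):
        out = []
        for c in code:
            out += [c + style_offset, c + style_offset]
        return out
    return decode

decoder_A = make_decoder(0.0)   # reconstructs identity A
decoder_B = make_decoder(5.0)   # reconstructs identity B

face_A = [1.0, 1.0, 3.0, 3.0]
swapped = decoder_B(encode(face_A))  # A's expression rendered as identity B
```

The point of the structure is that the shared encoder captures pose and expression while each decoder carries one identity, which is exactly what makes the swap possible.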


2017 ◽  
Author(s):  
Benjamin Sanchez-Lengeling ◽  
Carlos Outeiral ◽  
Gabriel L. Guimaraes ◽  
Alan Aspuru-Guzik

Molecular discovery seeks to generate chemical species tailored to very specific needs. In this paper, we present ORGANIC, a framework based on Objective-Reinforced Generative Adversarial Networks (ORGAN), capable of producing a distribution over molecular space that matches a certain set of desirable metrics. This methodology combines two successful techniques from the machine learning community: a Generative Adversarial Network (GAN), to create non-repetitive, sensible molecular species, and Reinforcement Learning (RL), to bias this generative distribution towards certain attributes. We explore several applications, from the optimization of random physicochemical properties to candidates for drug discovery and organic photovoltaic material design.
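The GAN half is omitted here, but the reward bias that the RL component adds can be sketched in plain Python: candidates are sampled with probability weighted by a reward, skewing the distribution toward desirable species. The `reward` below is a toy stand-in (counting 'C' characters), not a real physicochemical metric, and the function names are ours:

```python
import math
import random

def reward(molecule):
    """Toy stand-in metric: count 'C' characters. A real ORGANIC objective
    would be a physicochemical property such as solubility or drug-likeness."""
    return molecule.count("C")

def biased_sample(candidates, temperature=1.0, rng=None):
    """Sample one candidate with probability proportional to exp(reward / T),
    mimicking how the RL term skews the generator toward high-reward species."""
    rng = rng or random.Random(0)
    weights = [math.exp(reward(c) / temperature) for c in candidates]
    r = rng.random() * sum(weights)
    acc = 0.0
    for cand, w in zip(candidates, weights):
        acc += w
        if r <= acc:
            return cand
    return candidates[-1]
```

Lowering `temperature` sharpens the bias toward high-reward candidates, which is the knob that trades diversity against objective pressure.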


2020 ◽  
Vol 114 ◽  
pp. 242-245
Author(s):  
Jootaek Lee

The term Artificial Intelligence (AI) has changed since it was first coined by John McCarthy in 1956. AI, believed to have originated with Kurt Gödel's unprovable computational statements in 1931, is now often called deep learning or machine learning. AI is defined as a computer machine with the ability to make predictions about the future and solve complex tasks using algorithms. AI algorithms are enhanced and become effective with big data capturing the present and the past, while still inevitably reflecting human biases in their models and equations. AI is also capable of making choices like humans, mirroring human reasoning. AI can help robots to efficiently repeat the same labor-intensive procedures in factories, and can analyze historic and present data efficiently through deep learning, natural language processing, and anomaly detection. Thus, AI covers a spectrum of augmented intelligence relating to prediction, autonomous intelligence relating to decision making, automated intelligence for labor robots, and assisted intelligence for data analysis.


Proceedings ◽  
2021 ◽  
Vol 77 (1) ◽  
pp. 17
Author(s):  
Andrea Giussani

In the last decade, advances in statistical modeling and computer science have boosted the production of machine-generated content in different fields: from language to image generation, the quality of the generated outputs is remarkably high, sometimes better than what a human being can produce. Modern technological advances such as OpenAI's GPT-2 (and recently GPT-3) permit automated systems to dramatically alter reality with synthetic outputs, so that humans cannot distinguish the real copy from its counterfeit. One example is an article entirely written by GPT-2, but many others exist. In the field of computer vision, Nvidia's Generative Adversarial Network, commonly known as StyleGAN (Karras et al. 2018), has become the de facto reference point for producing huge numbers of fake human face portraits; additionally, recent algorithms have been developed to create both musical scores and mathematical formulas. This presentation aims to acquaint participants with the state-of-the-art results in this field: we cover both GANs and language modeling with recent applications. The novelty here is that we apply a transformer-based machine learning technique, namely RoBERTa (Liu et al. 2019), to the detection of human-produced versus machine-produced text in the context of fake news detection. RoBERTa builds on the well-known Bidirectional Encoder Representations from Transformers algorithm, known as BERT (Devlin et al. 2018), a bidirectional transformer for natural language processing developed by Google and pre-trained over a huge amount of unlabeled textual data to learn embeddings. We then use these representations as the input to our classifier to detect real versus machine-produced text. The application is demonstrated in the presentation.


2018 ◽  
Vol 14 (4) ◽  
pp. 734-747 ◽  
Author(s):  
Constance de Saint Laurent

There has been much hype, over the past few years, about the recent progress of artificial intelligence (AI), especially through machine learning. If one is to believe many of the headlines that have proliferated in the media, as well as in an increasing number of scientific publications, it would seem that AI is now capable of creating and learning in ways that are starting to resemble what humans can do, and that we should start to hope, or fear, that the creation of fully cognisant machines might be something we will witness in our lifetime. However, many of these beliefs are based on deep misconceptions about what AI can do, and how. In this paper, I start with a brief introduction to the principles of AI, machine learning, and neural networks, primarily intended for psychologists and social scientists, who often have much to contribute to the debates surrounding AI but lack a clear understanding of what it can currently do and how it works. I then debunk four common myths associated with AI: 1) it can create, 2) it can learn, 3) it is neutral and objective, and 4) it can solve ethically and/or culturally sensitive problems. In a third and last section, I argue that these misconceptions represent four main dangers: 1) avoiding debate, 2) naturalising our biases, 3) deresponsibilising creators and users, and 4) missing out on some of the potential uses of machine learning. I finally conclude on the potential benefits of using machine learning in research, and thus on the need to defend machine learning without romanticising what it can actually do.


1940 ◽  
Vol 44 (352) ◽  
pp. 338-349
Author(s):  
A. P. West

During the past few years an extensive amount of experimental data on split flaps has been made available to the aircraft industry through the publications of aeronautical research laboratories, both in this country and abroad. In general, each publication deals with one particular aspect of the problem, and when the effect of wing flaps on the performance of an aircraft is being estimated, a certain amount of difficulty may be experienced in deciding which of the many reports available gives results most readily applicable to the case being considered, and what allowances, if any, should be made for wing taper, flap cut-out, fuselage, etc.

In this report the available data has been analysed with a view to answering these questions, and presented in such a form that it may be readily applied to determine the most probable change in the aerodynamic characteristics of a wing that may be expected from the use of this type of flap.

From the appendix an estimate of the accuracy of the method can be obtained, as a comparison with full-scale data is given for lift and drag; for the other flap characteristics the original curves have been reproduced.


2021 ◽  
Author(s):  
Arjun Singh

Drug discovery is incredibly time-consuming and expensive, averaging over 10 years and $985 million per drug. Calculating the binding affinity between a target protein and a ligand is critical for discovering viable drugs. Although supervised machine learning (ML) models can predict binding affinity accurately, they suffer from a lack of interpretability and from inaccurate feature selection caused by multicollinear data. This study used self-supervised ML to reveal underlying protein-ligand characteristics that strongly influence binding affinity. Protein-ligand 3D models were collected from the PDBBind database and vectorized into 2422 features per complex. LASSO Regression and hierarchical clustering were utilized to minimize multicollinearity between features. Correlation analyses and Autoencoder-based latent space representations were generated to identify features significantly influencing binding affinity. A Generative Adversarial Network was used to simulate ligands with certain counts of a significant feature, and thereby determine the effect of a feature on improving binding affinity with a given target protein. It was found that the CC and CCCN fragment counts in the ligand notably influence binding affinity. Re-pairing proteins with simulated ligands that had higher CC and CCCN fragment counts could increase binding affinity by 34.99–37.62% and 36.83–36.94%, respectively. This discovery contributes to a more accurate representation of ligand chemistry that can increase the accuracy, explainability, and generalizability of ML models so that they can more reliably identify novel drug candidates. Directions for future work include integrating knowledge on ligand fragments into supervised ML models, examining the effect of CC and CCCN fragments on fragment-based drug design, and employing computational techniques to elucidate the chemical activity of these fragments.
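The study's pipeline is not reproduced here, but the mechanism that lets LASSO prune multicollinear features is its soft-thresholding (proximal) operator, sketched below in plain Python (the coefficient values are purely illustrative):

```python
def soft_threshold(w, lam):
    """LASSO's proximal operator: shrink a coefficient toward zero and
    zero it out entirely when its magnitude is below lam. This is what
    drives weak or redundant (e.g. collinear) features out of the model."""
    if w > lam:
        return w - lam
    if w < -lam:
        return w + lam
    return 0.0

# Shrinking a vector of raw coefficients: weak features drop to exactly zero.
coeffs = [2.5, -0.3, 0.1, -1.8]
sparse = [soft_threshold(w, 0.5) for w in coeffs]  # ~[2.0, 0.0, 0.0, -1.3]
```

Because coefficients hit exactly zero rather than merely shrinking, the surviving features form an interpretable subset, which is why LASSO is a natural partner to the clustering step described above.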


2015 ◽  
Vol 3 (2) ◽  
pp. 115-126 ◽  
Author(s):  
Naresh Babu Bynagari

Artificial Intelligence (AI) is one of the most promising and intriguing innovations of modernity. Its potential is virtually unlimited, from smart music selection in personal gadgets to intelligent analysis of big data and real-time fraud detection and prevention. At the core of the AI philosophy lies an assumption that once a computer system is provided with enough data, it can learn based on that input. The more data is provided, the more sophisticated its learning ability becomes. This feature has acquired the name "machine learning" (ML). The opportunities explored with ML are plentiful today, and one of them is the ability to set up an evolving security system that learns from past cyber-fraud experiences and develops more rigorous fraud detection mechanisms. Read on to learn more about ML, the types and magnitude of fraud evidenced in modern banking, e-commerce, and healthcare, and how ML has become an innovative, timely, and efficient fraud prevention technology.

