State of the art of predictive modeling for real-life applications

2021 ◽  
Author(s):  
Andreas Sepp

Predictive modeling techniques have recently witnessed significant improvement owing to advances in artificial intelligence and machine learning. This research presents a survey of the methods and applications of artificial intelligence and machine learning used in predictive analytics.

2021 ◽  
Author(s):  
Andreas Sepp

Artificial intelligence and machine learning methods have contributed significantly to the advancement of predictive analytics. This article presents a state of the art of the methods and applications of artificial intelligence and machine learning.


2021 ◽  
Vol 54 (6) ◽  
pp. 1-35
Author(s):  
Ninareh Mehrabi ◽  
Fred Morstatter ◽  
Nripsuta Saxena ◽  
Kristina Lerman ◽  
Aram Galstyan

With the widespread use of artificial intelligence (AI) systems and applications in our everyday lives, accounting for fairness has gained significant importance in the design and engineering of such systems. AI systems can be used in many sensitive environments to make important and life-changing decisions; thus, it is crucial to ensure that these decisions do not reflect discriminatory behavior toward certain groups or populations. More recently, work in traditional machine learning and deep learning has begun to address such challenges in different subdomains. With the commercialization of these systems, researchers are becoming more aware of the biases that these applications can contain and are attempting to address them. In this survey, we investigate different real-world applications that have shown biases in various ways, and we list different sources of bias that can affect AI applications. We then create a taxonomy of the fairness definitions that machine learning researchers have proposed to avoid existing bias in AI systems. In addition, we examine different domains and subdomains in AI, showing what researchers have observed with regard to unfair outcomes in state-of-the-art methods and the ways they have tried to address them. There are still many future directions and solutions that can be pursued to mitigate the problem of bias in AI systems. We hope that this survey will motivate researchers to tackle these issues in the near future by observing existing work in their respective fields.
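The fairness definitions that such a taxonomy catalogues can be made concrete with a small numeric check. The sketch below is a hypothetical illustration, not code from the survey: it computes the demographic parity difference, i.e. the gap in positive-prediction rates between two groups, for toy model outputs.

```python
def demographic_parity_difference(predictions, groups):
    """Gap in positive-prediction rate between two groups.

    predictions: list of 0/1 model outputs
    groups: parallel list of group labels ("a" or "b")
    """
    rate = {}
    for g in ("a", "b"):
        outcomes = [p for p, grp in zip(predictions, groups) if grp == g]
        rate[g] = sum(outcomes) / len(outcomes)
    return abs(rate["a"] - rate["b"])

# Toy example: group "a" receives positive outcomes 75% of the time,
# group "b" only 25% -- a demographic parity gap of 0.5.
preds  = [1, 1, 1, 0, 1, 0, 0, 0]
groups = ["a", "a", "a", "a", "b", "b", "b", "b"]
print(demographic_parity_difference(preds, groups))  # 0.5
```

A value of 0 would mean both groups receive positive predictions at the same rate; other definitions in the taxonomy (equalized odds, calibration, etc.) condition on the true label as well.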


2021 ◽  
Author(s):  
Kai Guo ◽  
Zhenze Yang ◽  
Chi-Hua Yu ◽  
Markus J. Buehler

This review revisits the state of the art of research efforts on the design of mechanical materials using machine learning.


2021 ◽  
Vol 11 (17) ◽  
pp. 8074
Author(s):  
Tierui Zou ◽  
Nader Aljohani ◽  
Keerthiraj Nagaraj ◽  
Sheng Zou ◽  
Cody Ruben ◽  
...  

In power systems, real-time monitoring of cyber-physical security is critical, and false data injection attacks on wide-area measurements are a major concern. However, the database of network parameters is just as crucial to the state estimation process: maintaining the accuracy of the system model is the other part of the equation, since almost all applications in power systems depend heavily on the state estimator outputs. While much effort has been devoted to false data injection attacks on measurements, little work has been reported on the broader theme of false data injection into the network parameter database. State-of-the-art physics-based model solutions correct false data injection in the network parameter database considering only the available wide-area measurements, and they use deterministic models for the correction. In this paper, an overdetermined physics-based model for correcting parameter false data injection is presented. The overdetermined model uses a parameter database correction Jacobian matrix and a Taylor series expansion approximation. The method further applies the concept of synthetic measurements, i.e., measurements that do not exist in the real-life system: a machine learning linear-regression-based model for measurement prediction is integrated into the framework by deriving weights for the creation of the synthetic measurements. The presented model is validated on the IEEE 118-bus system. Numerical results show that the approximation error is lower than the state of the art while providing robustness to the correction process. The model is easy to implement on top of the classical weighted-least-squares solution, which highlights its potential for real-life implementation.
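The classical weighted-least-squares (WLS) solution that the paper builds on can be sketched in a few lines. The example below is a toy illustration under assumed values (the matrices `H`, `z`, and `W` are invented for the demo, not taken from the paper): it solves one linearized WLS step via the normal equations, as a state estimator would at each Gauss-Newton iteration.

```python
import numpy as np

def wls_step(H, z, W):
    """Solve min_x (z - H x)^T W (z - H x) via the normal equations."""
    G = H.T @ W @ H                  # gain matrix
    return np.linalg.solve(G, H.T @ W @ z)

# Overdetermined toy system: 4 measurements, 2 state variables.
H = np.array([[1.0,  0.0],
              [0.0,  1.0],
              [1.0,  1.0],
              [1.0, -1.0]])
x_true = np.array([2.0, 3.0])
z = H @ x_true                       # noiseless measurements for the demo
W = np.diag([1.0, 1.0, 0.5, 0.5])    # weights (inverse measurement variances)

x_hat = wls_step(H, z, W)
print(x_hat)                         # recovers [2., 3.]
```

The paper's contribution sits on top of this machinery: the Jacobian is extended toward the parameter database, and synthetic measurements (predicted by a linear-regression model) are added as extra weighted rows of `H` and `z` to keep the correction problem overdetermined.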


Author(s):  
Shatakshi Singh ◽  
Kanika Gautam ◽  
Prachi Singhal ◽  
Sunil Kumar Jangir ◽  
Manish Kumar

The recent development of artificial intelligence in this decade has been quite astounding, and machine learning is one of the core subareas of AI. The ML field is growing incessantly, and its demand and importance keep rising. It has transformed the way data is extracted, analyzed, and interpreted: computers are put into a self-training mode so that, when new data is fed to them, they can learn, grow, change, and develop themselves without explicit programming. This helps make useful predictions that can guide better decisions in real-life situations without human interference. Selecting an ML tool is always a challenging task, since choosing an appropriate tool can save time and make it faster and easier to deliver a solution. This chapter classifies various machine learning tools along the following aspects: tools for non-programmers; for model deployment; for computer vision, natural language processing, and audio; for reinforcement learning; and for data mining.


2018 ◽  
Vol 186 ◽  
pp. 09004
Author(s):  
André Schaaff ◽  
Marc Wenger

The work environment has evolved deeply in recent decades with the generalisation of IT in terms of hardware, online resources, and software. Librarians do not escape this movement, and their working environment is becoming essentially digital (databases, online publications, wikis, specialised software, etc.). With the Big Data era, new tools implementing artificial intelligence, text mining, machine learning, etc. will become available. Most of these technologies already exist, but they will become widespread and strongly impact our ways of working. The development of business-oriented social networks will also have an increasing influence. In this context, it is interesting to reflect on how the work environment of librarians will evolve. Maintaining interest in the daily work is fundamental, and over-automation is not desirable: it is imperative that these processes remain human-driven. We draw on the state of the art of the new technologies that impact librarians' work, and initiate a discussion about how to integrate them while preserving librarians' expertise.


2021 ◽  
Vol 295 (2) ◽  
pp. 97-100
Author(s):  
K. Seniva

This article discusses the main ways in which neural networks and machine learning methods of various types are used in computer games. Machine learning and neural networks are hot topics in many technology fields, and one of them is the creation of computer games, where new tools are used to make games more interesting; games remastered or modified with neural networks have become a new trend. Neural networks are one of the most popular ways to implement artificial intelligence and are used in everything from medicine to the entertainment industry, but games are one of the most promising areas for their development: the game world is an ideal platform for testing artificial intelligence without the danger of harming nature or people. Making bots more sophisticated is just a small part of what neural networks can do. They are also actively used in game development itself, and in some areas they already make people feel uneasy; research is ongoing on color and light correction, real-time character animation, and behavior control. The main types of neural networks that can learn such functions are considered. Neural networks learn (and self-learn) very quickly, and the more primitive the task, the sooner a human becomes unnecessary for it. This is already noticeable in the gaming industry, and it will soon spread to other areas of life, because games are simply a convenient platform for experimenting with artificial intelligence before deploying it in real life. The main problem scientists face is that it is difficult for neural networks to copy the mechanics of a game; there are some achievements in this direction, but research continues. Therefore, game development will still require human specialists for a long time to come, although AI already copes with some tasks.
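A neural-network game bot of the kind discussed here reduces, at its core, to a policy that maps a game-state vector to an action. The sketch below is a hypothetical, minimal illustration (the state features, action names, and fixed random weights are all invented for the demo): a two-layer feedforward network scores three bot actions; in practice the weights would be learned, e.g. by reinforcement learning.

```python
import numpy as np

# Fixed random weights stand in for trained ones in this illustration.
rng = np.random.default_rng(0)
W1 = rng.normal(size=(4, 8))    # state (4 features) -> hidden (8 units)
W2 = rng.normal(size=(8, 3))    # hidden -> scores for 3 actions

ACTIONS = ["move", "attack", "flee"]

def choose_action(state):
    """Greedy policy: pick the action with the highest network score."""
    hidden = np.maximum(0.0, state @ W1)   # ReLU activation
    scores = hidden @ W2
    return ACTIONS[int(np.argmax(scores))]

# Toy state: [distance_to_player, own_health, player_health, ammo]
print(choose_action(np.array([0.2, 0.9, 0.4, 1.0])))
```

Copying a game's mechanics, the difficulty the article highlights, would mean learning not just this policy but a model of how states evolve in response to actions.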


2020 ◽  
Vol 73 (4) ◽  
pp. 275-284
Author(s):  
Dukyong Yoon ◽  
Jong-Hwan Jang ◽  
Byung Jin Choi ◽  
Tae Young Kim ◽  
Chang Ho Han

Biosignals such as the electrocardiogram or photoplethysmogram are widely used for determining and monitoring the medical condition of patients. It was recently discovered that more information can be gathered from biosignals by applying artificial intelligence (AI). At present, one of the most impactful advances in AI is deep learning. Deep learning-based models can extract important features from raw data without human feature engineering, provided the amount of data is sufficient. This capability presents opportunities to obtain latent information that may be used as a digital biomarker for detecting or predicting a clinical outcome or event without further invasive evaluation. However, the black-box nature of deep learning models is difficult to understand for clinicians familiar with conventional methods of biosignal analysis. A basic knowledge of AI and machine learning is required for clinicians to properly interpret the extracted information and to adopt it in clinical practice. This review covers the basics of AI and machine learning and the feasibility of their application to real-life situations by clinicians in the near future.
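The feature extraction step that such deep models automate can be illustrated with the basic building block of a 1-D convolutional network. The sketch below is an assumed toy example, not from the review: a single hand-set difference kernel slides over a synthetic ECG-like trace and responds to sharp upstrokes (such as an R-wave). In a deep model, many such filters are learned from data rather than engineered by hand.

```python
import numpy as np

def conv1d(signal, kernel):
    """Valid (no-padding) 1-D convolution producing a feature map."""
    k = len(kernel)
    return np.array([signal[i:i + k] @ kernel
                     for i in range(len(signal) - k + 1)])

ecg_like = np.array([0.0, 0.1, 0.0, 1.0, 0.2, 0.0, 0.1, 0.0])  # toy beat
kernel = np.array([-1.0, 1.0])        # responds to rising edges
feature_map = conv1d(ecg_like, kernel)
print(int(feature_map.argmax()))      # 2: strongest response at the upstroke
```

Stacking many such filters with nonlinearities and pooling is what lets a deep network turn a raw waveform into latent features, with no hand-crafted intervals or amplitudes, which is precisely what makes the result hard to inspect without some ML literacy.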

