Machine Learning and Artificial Intelligence in Marketing Applications During COVID-19 Pandemic

Author(s):  
L. Kuladeep Kumar

Since the outbreak of the novel SARS-CoV-2 virus, machine learning and artificial intelligence (ML/AI) have become powerful marketing tools for sustaining economic activity during the COVID-19 pandemic. The goal of ML/AI technology is to provide data and insights so that brands can understand what is working and what is not, helping marketers understand and anticipate which communications are effective and how to deliver them. These methods are therefore widely employed by marketing providers. AI uses machine learning to adapt and make changes that affect marketing in real time. The exact impact of events such as the COVID-19 pandemic is hard to predict, but AI can help track and anticipate these circumstances and provide the data needed to respond. This chapter reviews recent studies that use such advanced technology from different research perspectives and that address problems and challenges by applying these algorithms to assist marketing experts with real-world issues. It also offers suggestions to researchers on ML/AI-based model design, and to marketing experts and policymakers on errors encountered while tackling the current pandemic.

Author(s):  
Petar Radanliev ◽  
David De Roure ◽  
Kevin Page ◽  
Max Van Kleek ◽  
Omar Santos ◽  
...  

Abstract Multiple governmental agencies and private organisations have made commitments for the colonisation of Mars. Such colonisation requires complex systems and infrastructure that could be very costly to repair or replace in cases of cyber-attacks. This paper surveys deep learning algorithms, IoT cyber security and risk models, and established mathematical formulas to identify the best approach for developing a dynamic and self-adapting system for predictive cyber risk analytics supported with Artificial Intelligence and Machine Learning and real-time intelligence in edge computing. The paper presents a new mathematical approach for integrating concepts for cognition engine design, edge computing and Artificial Intelligence and Machine Learning to automate anomaly detection. This engine instigates a step change by applying Artificial Intelligence and Machine Learning embedded at the edge of IoT networks, to deliver safe and functional real-time intelligence for predictive cyber risk analytics. This will enhance capacities for risk analytics and assist in the creation of a comprehensive and systematic understanding of the opportunities and threats that arise when edge computing nodes are deployed, and when Artificial Intelligence and Machine Learning technologies are migrated to the periphery of the internet and into local IoT networks.
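The abstract does not specify the anomaly-detection model itself; as a rough illustration of edge-style anomaly detection on streaming telemetry, here is a minimal sketch using scikit-learn's IsolationForest. The simulated sensor data and the model choice are hypothetical stand-ins, not the paper's cognition engine:

```python
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
# Simulated telemetry from an edge node: mostly normal readings plus a few
# injected outliers standing in for anomalous (possibly malicious) traffic.
normal = rng.normal(loc=0.0, scale=1.0, size=(500, 3))
anomalies = rng.normal(loc=8.0, scale=0.5, size=(5, 3))
stream = np.vstack([normal, anomalies])

# Train on historical "normal" traffic; flag new points in (near) real time.
detector = IsolationForest(contamination=0.01, random_state=0).fit(normal)
flags = detector.predict(stream)  # +1 = normal, -1 = anomaly

n_flagged = int((flags == -1).sum())
```

A deployed edge node would retrain or update such a detector locally and forward only flagged events upstream, which is the kind of "real-time intelligence at the periphery" the paper argues for.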


2021 ◽  
Vol 54 (6) ◽  
pp. 1-35
Author(s):  
Ninareh Mehrabi ◽  
Fred Morstatter ◽  
Nripsuta Saxena ◽  
Kristina Lerman ◽  
Aram Galstyan

With the widespread use of artificial intelligence (AI) systems and applications in our everyday lives, accounting for fairness has gained significant importance in the design and engineering of such systems. AI systems can be used in many sensitive environments to make important and life-changing decisions; thus, it is crucial to ensure that these decisions do not reflect discriminatory behavior toward certain groups or populations. More recently, some work has been developed in traditional machine learning and deep learning that addresses such challenges in different subdomains. With the commercialization of these systems, researchers are becoming more aware of the biases that these applications can contain and are attempting to address them. In this survey, we investigate different real-world applications that have shown biases in various ways, and we list different sources of bias that can affect AI applications. We then create a taxonomy of the fairness definitions that machine learning researchers have proposed to avoid the existing bias in AI systems. In addition, we examine different domains and subdomains in AI, showing what researchers have observed with regard to unfair outcomes in state-of-the-art methods and the ways they have tried to address them. There are still many future directions and solutions that can be taken to mitigate the problem of bias in AI systems. We hope that this survey will motivate researchers to tackle these issues in the near future by observing existing work in their respective fields.
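One of the simplest fairness definitions such a taxonomy covers is demographic (statistical) parity: positive predictions should occur at similar rates across groups. A minimal sketch of how the parity gap can be measured, with illustrative toy predictions and group labels:

```python
import numpy as np

def demographic_parity_gap(y_pred, group):
    """Absolute difference in positive-prediction rates between two groups."""
    y_pred, group = np.asarray(y_pred), np.asarray(group)
    rate_a = y_pred[group == 0].mean()  # positive rate in group 0
    rate_b = y_pred[group == 1].mean()  # positive rate in group 1
    return abs(rate_a - rate_b)

# Toy loan decisions: group 0 approved 3/4, group 1 approved 1/4.
y_pred = [1, 1, 1, 0, 1, 0, 0, 0]
group  = [0, 0, 0, 0, 1, 1, 1, 1]
gap = demographic_parity_gap(y_pred, group)  # 0.5 -> large disparity
```

A gap near zero satisfies this definition; the survey's point is that many such definitions exist and can conflict, so the right metric depends on the application.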


2021 ◽  
Author(s):  
Nagaraju Reddicharla ◽  
Subba Ramarao Rachapudi ◽  
Indra Utama ◽  
Furqan Ahmed Khan ◽  
Prabhker Reddy Vanam ◽  
...  

Abstract Well testing is one of the vital processes in reservoir performance monitoring. As a field matures and the well stock grows, testing becomes a tedious job in terms of resources (MPFM and test separators), and this affects delivery of the production quota. In addition, test data validation and approval follow a business process that needs up to 10 days to accept or reject a well test. Almost 10,000 well tests were conducted, and around 10 to 15% of them were rejected statistically per year. The objective of this paper is to develop a methodology that reduces well test rejections and raises a timely flag for operator intervention to recommence the well test. This case study was applied in a mature field that has been producing for 40 years and has a good volume of historical well test data available. The paper discusses the development of a data-driven well test data analyzer and optimizer, supported by artificial intelligence (AI), for wells tested using MPFM, in a two-staged approach. The motivating idea is to ingest historical and real-time data together with well model performance curves and to prescribe the quality of the well test data, flagging it to the operator in real time. The ML prediction results help testing operations and can reduce the test acceptance turnaround drastically, from 10 days to hours. In the second layer, an unsupervised model built on historical data helps identify the parameters that drive rejection of a well test, for example test duration, choke size, and GOR. The outcome of the modeling will be incorporated into updates of the well test procedure and testing philosophy. This approach is under evaluation in one of the assets in ADNOC Onshore. The results are expected to reduce well test rejection by at least 5%, which further optimizes the required resources and improves the back-allocation process.
Furthermore, real-time flagging of test quality will help reduce the validation cycle from 10 days to hours and improve the well testing cycle process. This methodology improves integrated reservoir management compliance with well testing requirements in assets where resources are limited, and it is envisioned to be integrated with a full-field digital oilfield implementation. This is a novel application of machine learning and artificial intelligence to well testing: it maximizes the utilization of real-time data to create an advisory system that improves test data quality monitoring and enables timely decision-making to reduce well test rejection.
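The two-staged approach described above can be sketched roughly as a supervised quality classifier followed by clustering of the rejected tests. Everything below (the features, the labelling rule, and the model choices) is a hypothetical stand-in for illustration, not ADNOC's actual pipeline:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.cluster import KMeans

rng = np.random.default_rng(42)
# Hypothetical features per well test: duration (hrs), choke size, GOR.
X = rng.uniform([2.0, 16.0, 200.0], [24.0, 64.0, 1200.0], size=(300, 3))
# Assumed labelling rule for this sketch: very short tests get rejected.
y = (X[:, 0] < 6.0).astype(int)  # 1 = rejected, 0 = accepted

# Stage 1: supervised model flags likely-invalid tests in real time.
clf = RandomForestClassifier(random_state=0).fit(X, y)

# Stage 2: unsupervised clustering of rejected tests to study what
# combinations of duration / choke / GOR drive rejection.
rejected = X[y == 1]
km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(rejected)
```

In a deployment, the stage-1 flag would prompt the operator to recommence a failing test immediately instead of waiting days for the validation cycle, while stage-2 clusters would feed back into the testing procedure.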


2019 ◽  
Vol 2019 (1) ◽  
pp. 26-46 ◽  
Author(s):  
Thee Chanyaswad ◽  
Changchang Liu ◽  
Prateek Mittal

Abstract A key challenge facing the design of differential privacy in the non-interactive setting is to maintain the utility of the released data. To overcome this challenge, we utilize the Diaconis-Freedman-Meckes (DFM) effect, which states that most projections of high-dimensional data are nearly Gaussian. Hence, we propose the RON-Gauss model that leverages the novel combination of dimensionality reduction via random orthonormal (RON) projection and the Gaussian generative model for synthesizing differentially-private data. We analyze how RON-Gauss benefits from the DFM effect, and present multiple algorithms for a range of machine learning applications, including both unsupervised and supervised learning. Furthermore, we rigorously prove that (a) our algorithms satisfy the strong ɛ-differential privacy guarantee, and (b) RON projection can lower the level of perturbation required for differential privacy. Finally, we illustrate the effectiveness of RON-Gauss under three common machine learning applications – clustering, classification, and regression – on three large real-world datasets. Our empirical results show that (a) RON-Gauss outperforms previous approaches by up to an order of magnitude, and (b) loss in utility compared to the non-private real data is small. Thus, RON-Gauss can serve as a key enabler for real-world deployment of privacy-preserving data release.
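A rough sketch of the RON-Gauss idea follows, assuming a QR-based random orthonormal projection and simple Laplace noise on the Gaussian mean as a stand-in for the paper's calibrated ɛ-DP mechanism (the exact sensitivity analysis and noise scales are in the paper and not reproduced here):

```python
import numpy as np

rng = np.random.default_rng(0)

def ron_projection(d, p, rng):
    """Random orthonormal projection (p x d), p <= d, via QR decomposition."""
    q, _ = np.linalg.qr(rng.normal(size=(d, p)))
    return q.T  # rows are orthonormal

# High-dimensional, clearly non-Gaussian data; by the DFM effect,
# most low-dimensional projections of it are nearly Gaussian.
X = rng.exponential(scale=1.0, size=(1000, 50))
X = X - X.mean(axis=0)  # center

W = ron_projection(50, 5, rng)
Z = X @ W.T  # projected data, shape (1000, 5)

# Gaussian generative model on the projection; the Laplace noise on the
# mean is a placeholder for the paper's eps-DP perturbation.
eps = 1.0
mu = Z.mean(axis=0) + rng.laplace(scale=1.0 / eps, size=5)
cov = np.cov(Z, rowvar=False)

# Synthesize a differentially-private surrogate dataset.
synthetic = rng.multivariate_normal(mu, cov, size=1000)
```

The point of the projection step is that perturbation can be applied in the low-dimensional Gaussian space, where far less noise is needed than in the original space.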


Author(s):  
Enrique Lee Huamaní ◽  
Lilian Ocares Cunyarachi

Due to the pandemic caused by Covid-19, daily life has changed significantly. For this reason, biosecurity measures have been implemented to prevent the spread of the virus, as an effective way to reactivate economic activities. In this sense, the present paper focuses on real-time face detection as a control measure at the entrance to an entity, thus avoiding the spread of the virus while recognizing the identity of workers despite the use of masks, and thus reducing the risk of entry by individuals from outside the organization. The objective is therefore to contribute to the security of a company through the application of a machine learning methodology. This methodology was selected because it adapts well to the aims of this project. Consequently, algorithms were applied progressively, resulting in the intended control system, since each particularity of an individual's face was recognized in relation to its corresponding identification. Finally, the results of this article benefit the security of organizations regardless of their field or sector. Keywords— Control, Detection, Facial Recognition, Facial Mask, Face recognition, Machine learning.
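The recognition pipeline is not detailed in the abstract; as an illustrative sketch, masked-face identification is often framed as matching an embedding vector from the camera against a gallery of enrolled workers. The 4-dimensional embeddings and threshold below are hypothetical stand-ins for the output of a real face-embedding model:

```python
import numpy as np

def cosine_similarity(a, b):
    a, b = np.asarray(a, float), np.asarray(b, float)
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def identify(probe, gallery, threshold=0.8):
    """Return the enrolled worker whose embedding best matches the probe,
    or None if no similarity clears the threshold (unknown visitor)."""
    best_name, best_sim = None, threshold
    for name, emb in gallery.items():
        sim = cosine_similarity(probe, emb)
        if sim > best_sim:
            best_name, best_sim = name, sim
    return best_name

# Hypothetical enrolled embeddings for two workers.
gallery = {"ana": [0.9, 0.1, 0.0, 0.2], "luis": [0.1, 0.8, 0.3, 0.0]}
probe = [0.88, 0.12, 0.05, 0.18]  # masked capture of the same worker
who = identify(probe, gallery)
```

Returning `None` for unmatched probes is what lets such a system deny entry to individuals from outside the organization.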


2021 ◽  
Author(s):  
S. H. Al Gharbi ◽  
A. A. Al-Majed ◽  
A. Abdulraheem ◽  
S. Patil ◽  
S. M. Elkatatny

Abstract Due to high demand for energy, oil and gas companies started to drill wells in remote areas and unconventional environments. This raised the complexity of drilling operations, which were already challenging and complex. To adapt, drilling companies expanded their use of the real-time operation center (RTOC) concept, in which real-time drilling data are transmitted from remote sites to companies’ headquarters. In RTOC, groups of subject matter experts monitor the drilling live and provide real-time advice to improve operations. With the increase of drilling operations, processing the volume of generated data is beyond a human's capability, limiting the RTOC impact on certain components of drilling operations. To overcome this limitation, artificial intelligence and machine learning (AI/ML) technologies were introduced to monitor and analyze the real-time drilling data, discover hidden patterns, and provide fast decision-support responses. AI/ML technologies are data-driven technologies, and their quality relies on the quality of the input data: if the quality of the input data is good, the generated output will be good; if not, the generated output will be bad. Unfortunately, due to the harsh environments of drilling sites and the transmission setups, not all of the drilling data is good, which negatively affects the AI/ML results. The objective of this paper is to utilize AI/ML technologies to improve the quality of real-time drilling data. The paper fed a large real-time drilling dataset, consisting of over 150,000 raw data points, into Artificial Neural Network (ANN), Support Vector Machine (SVM), and Decision Tree (DT) models. The models were trained on both valid and invalid datapoints. The confusion matrix was used to evaluate the different AI/ML models, including different internal architectures. Despite its slowness, the ANN achieved the best result, with an accuracy of 78%, compared to 73% and 41% for DT and SVM, respectively.
The paper concludes by presenting a process for using AI technology to improve real-time drilling data quality. To the author's knowledge, based on literature in the public domain, this paper is one of the first to compare the use of multiple AI/ML techniques for quality improvement of real-time drilling data. The paper provides a guide for improving the quality of real-time drilling data.
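The model comparison described above can be sketched with scikit-learn. The synthetic channels, labelling rule, and architectures below are illustrative stand-ins for the paper's 150,000-point dataset and tuned models, not a reproduction of its 78% / 73% / 41% results:

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import confusion_matrix, accuracy_score

rng = np.random.default_rng(1)
# Synthetic stand-ins for real-time drilling channels (e.g. hookload, RPM,
# flow rate, torque); label 1 = valid datapoint, 0 = not valid.
X = rng.normal(size=(2000, 4))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

models = {
    "ANN": MLPClassifier(hidden_layer_sizes=(16,), max_iter=500, random_state=0),
    "SVM": SVC(random_state=0),
    "DT": DecisionTreeClassifier(random_state=0),
}
results = {}
for name, model in models.items():
    model.fit(X_tr, y_tr)
    pred = model.predict(X_te)
    results[name] = {
        "accuracy": accuracy_score(y_te, pred),
        "confusion": confusion_matrix(y_te, pred),  # rows: true, cols: pred
    }
```

The confusion matrix is the right evaluation here because, for data cleaning, false "valid" flags (letting bad data through) and false "invalid" flags (discarding good data) carry different costs.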


2021 ◽  
Vol 10 (1) ◽  
pp. 77-88
Author(s):  
Sachin Pandurang Godse ◽  
Shalini Singh ◽  
Sonal Khule ◽  
Shubham Chandrakant Wakhare ◽  
Vedant Yadav

Physiotherapy is a popular treatment for curing bone-related injuries and pain. In many cases, due to sudden jerks or accidents, a patient may suffer severe pain, and physiotherapy is an effective treatment for such patients. The aim here is to build a framework using artificial intelligence and machine learning to provide patients with a digitalized system for physiotherapy. Although various computer-aided assessments of physiotherapy rehabilitation exist, recent approaches to computer-aided monitoring and performance evaluation lack versatility and robustness. The authors' approach is to propose an application that records a patient's physiotherapy exercises and provides personalized advice, based on user performance, for refinement of the therapy. Using the OpenPose library, the system detects the angles between joints and, depending on the range of motion, guides patients in accomplishing physiotherapy at home. It also suggests different physio-exercises to patients. With the help of OpenPose, it is possible to render patient images or real-time video.
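Computing a joint angle from OpenPose keypoints reduces to the angle between the two limb vectors meeting at the joint; a minimal sketch (the keypoint coordinates and the range-of-motion limits are hypothetical examples, not values from the paper):

```python
import numpy as np

def joint_angle(a, b, c):
    """Angle at joint b (degrees) formed by keypoints a-b-c,
    e.g. shoulder-elbow-wrist from an OpenPose skeleton."""
    a, b, c = (np.asarray(p, float) for p in (a, b, c))
    v1, v2 = a - b, c - b  # limb vectors meeting at the joint
    cosang = v1 @ v2 / (np.linalg.norm(v1) * np.linalg.norm(v2))
    return float(np.degrees(np.arccos(np.clip(cosang, -1.0, 1.0))))

# Hypothetical 2-D keypoints for one frame: shoulder, elbow, wrist.
shoulder, elbow, wrist = (0.0, 0.0), (0.0, -1.0), (1.0, -1.0)
angle = joint_angle(shoulder, elbow, wrist)  # right angle at the elbow

# Example range-of-motion check an exercise guide might apply per frame.
in_range = 80.0 <= angle <= 100.0
```

Run per video frame, such a check is what lets the system tell the patient whether a repetition stayed within the prescribed range of motion.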


Author(s):  
Mamata Rath ◽  
Sushruta Mishra

Machine learning is a field that developed out of artificial intelligence (AI). Applying AI, we sought to build better and more intelligent machines. However, apart from a few simple tasks, such as finding the shortest path between two points, it is difficult to program more complex and constantly evolving challenges. There was a realization that the only way to accomplish this was to let machines learn for themselves, much as a child learns from experience. So machine learning was developed as a new capability for computers. Moreover, machine learning is now present in so many segments of technology that we do not even notice it while using it. This chapter explores advanced-level security in network and real-time applications using machine learning.


2021 ◽  
pp. 164-184
Author(s):  
Saiph Savage ◽  
Carlos Toxtli ◽  
Eber Betanzos-Torres

The artificial intelligence (AI) industry has created new jobs that are essential to the real-world deployment of intelligent systems. Part of this work focuses on labelling data for machine learning models or having workers complete tasks that AI alone cannot do. These workers are usually known as ‘crowd workers’—they are part of a large distributed crowd that is jointly (but separately) working on the tasks, although they are often invisible to end-users, leading to workers often being paid below minimum wage and having limited career growth. In this chapter, we draw upon the field of human–computer interaction to provide research methods for studying and empowering crowd workers. We present our Computational Worker Leagues, which enable workers to work towards their desired professional goals and also supply quantitative information about crowdsourcing markets. This chapter demonstrates the benefits of this approach and highlights important factors to consider when researching the experiences of crowd workers.

