Indonesia Network Infrastructures and Workforce Adequacy to Implement Machine Learning for Large-Scale Manufacturing

2021 · Vol 8 (1) · pp. 1-16
Author(s): Steven Anderson, Ansarullah Lawi

Technological development leading up to Industry 4.0 has incentivized manufacturing industries to invest in digital technology with the aim of increasing capability and efficiency in manufacturing activity. Major manufacturers have begun implementing cyber-physical systems for industrial monitoring and control. These systems generate large volumes of data, and processing such big data calls for machine learning algorithms, which can extract patterns from big data to produce useful information. This study examines whether Indonesia's current network infrastructure and workforce capability can support the implementation of machine learning, especially in large-scale manufacturing, and compares Indonesia with countries that have taken a positive stance on implementing machine learning in manufacturing. The conclusion drawn from this research is that Indonesia's current infrastructure and workforce are still unable to fully support the implementation of machine learning technology in the manufacturing industry, and improvements are needed.

2021
Author(s): Bohdan Polishchuk, Andrii Berko, Lyubomyr Chyrun, Myroslava Bublyk, Vadim Schuchmann

Author(s): Anastasiia Ivanitska, Dmytro Ivanov, Ludmila Zubik

An analysis of the available methods and models for forming recommendations for potential buyers in networked information systems is carried out with the aim of developing effective advertising-selection modules. The effectiveness of machine learning technologies for analyzing user preferences, based on processing data about purchases made by users with similar profiles, is substantiated. A recommendation-formation model based on machine learning technology is proposed, its operation is tested on test data sets, and the model's adequacy is assessed using RMSE. Keywords: behavior prediction; advertising based on similarity; collaborative filtering; matrix factorization; big data; machine learning
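The matrix-factorization approach named in the keywords can be sketched briefly. The following is a minimal illustration, not the authors' model: a small ratings matrix is factored into user and item vectors by stochastic gradient descent, and the fit is scored with RMSE on the observed entries. The toy data and hyperparameters (`k`, `lr`, `reg`) are invented for the example.

```python
import numpy as np

def factorize(R, mask, k=2, lr=0.01, reg=0.1, epochs=500, seed=0):
    """Factor a (users x items) rating matrix R into P @ Q.T by SGD,
    using only the observed entries indicated by `mask`."""
    rng = np.random.default_rng(seed)
    n_users, n_items = R.shape
    P = rng.normal(scale=0.1, size=(n_users, k))
    Q = rng.normal(scale=0.1, size=(n_items, k))
    rows, cols = np.nonzero(mask)
    for _ in range(epochs):
        for u, i in zip(rows, cols):
            err = R[u, i] - P[u] @ Q[i]       # residual on one rating
            P[u] += lr * (err * Q[i] - reg * P[u])
            Q[i] += lr * (err * P[u] - reg * Q[i])
    return P, Q

def rmse(R, mask, P, Q):
    """Root-mean-square error over the observed entries only."""
    pred = P @ Q.T
    return np.sqrt(np.mean((R[mask] - pred[mask]) ** 2))

# Toy user-item ratings; 0 marks an unobserved entry
R = np.array([[5, 3, 0, 1],
              [4, 0, 0, 1],
              [1, 1, 0, 5],
              [0, 1, 5, 4]], dtype=float)
mask = R > 0
P, Q = factorize(R, mask)
print(rmse(R, mask, P, Q))  # small training RMSE on the observed ratings
```

Predictions for the unobserved cells are then simply the corresponding entries of `P @ Q.T`.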


2021 · Vol 65 (8) · pp. 51-60
Author(s): Yujeong Kim

Today, each country has an interest in the digital economy and has established and implemented policies aimed at digital technology development and digital transformation for the transition to the digital economy. In particular, interest in digital technologies such as big data, 5G, and artificial intelligence, which are recognized as important factors in the digital economy, has been increasing recently, and the government's role in technological development and international cooperation has become important. In addition to overall digital economic policy, the Russian and Korean governments are also trying to improve their international competitiveness and take a leading position in the new economic order by establishing related technology and industrial policies. The Republic of Korea often refers to data, networks, and artificial intelligence as D∙N∙A, and established policies in each of these areas in 2019; Russia likewise established and implemented policies in the same fields in 2019. It is therefore timely to find ways to expand cooperation between Russia and the Republic of Korea. In particular, the years 2020 and 2021 mark the 30th anniversary of diplomatic relations between the two countries: not only have large-scale events and exchange programs been prepared, but the relationship is also deepening as part of the continued foreign policies of both countries, Russia's Eastern Policy and the New Northern Policy of the Republic of Korea. This paper therefore compares and analyzes the two countries' policies on big data, 5G, and artificial intelligence to seek long-term sustainable cooperation in the digital economy.


Author(s): E.B. LENCHUK

The article deals with the modern processes of changing the technological basis of the world economy through a large-scale transition to the technologies of the fourth industrial revolution, which is shaping new markets and opening up prospects for sustainable economic growth. Competition between countries is shifting into the scientific and technological sphere, where Russia remains a nearly invisible player. The author seeks to identify the main reasons for this lag and a set of state scientific and technological policy measures that can give the necessary impetus to Russia's scientific and technological development.


2021
Author(s): Mohammad Hassan Almaspoor, Ali Safaei, Afshin Salajegheh, Behrouz Minaei-Bidgoli

Classification is one of the most important and widely used tasks in machine learning; its purpose is to create a rule for assigning data to pre-existing categories on the basis of a set of training samples. Employed successfully in many scientific and engineering areas, the Support Vector Machine (SVM) is among the most promising classification methods in machine learning. With the advent of big data, many machine learning methods have been challenged by big data's characteristics. The standard SVM was proposed for batch learning, in which all data are available at the same time, and it has high time complexity: increasing the number of training samples intensifies the need for computational resources and memory. Hence, many attempts have been made to adapt the SVM to online learning conditions and to large-scale data. This paper focuses on the analysis, identification, and classification of existing methods for adapting the SVM to online conditions and large-scale data. These methods can be employed to classify big data, and the paper proposes research areas for future studies. Given its advantages, the SVM can be among the first options for adaptation to big data and for big data classification. For this purpose, appropriate techniques should be developed to preprocess data into a form suitable for learning. Existing frameworks for parallel and distributed processing should also be employed so that SVMs can be made scalable and properly online, able to handle big data.
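One well-known instance of the online-SVM adaptations this paper surveys is hinge-loss stochastic gradient descent in the style of Pegasos, which processes one example at a time and so never needs the full training set in memory. The sketch below illustrates that general idea only; the data stream, labels, and hyperparameters are invented, and this is not a method proposed in the paper itself.

```python
import numpy as np

def pegasos_step(w, x, y, t, lam=0.01):
    """One online hinge-loss SGD update with step size 1/(lam*t)."""
    eta = 1.0 / (lam * t)
    if y * (w @ x) < 1:              # margin violated: move toward the example
        w = (1 - eta * lam) * w + eta * y * x
    else:                            # margin satisfied: regularization shrink only
        w = (1 - eta * lam) * w
    return w

# Hypothetical stream of 2-D points whose label is the sign of the first feature
rng = np.random.default_rng(0)
w = np.zeros(2)
for t in range(1, 2001):
    x = rng.normal(size=2)
    y = 1.0 if x[0] > 0 else -1.0
    w = pegasos_step(w, x, y, t)     # constant memory, one example at a time

# The learned linear separator should recover the same decision rule
test_x = rng.normal(size=(200, 2))
test_y = np.where(test_x[:, 0] > 0, 1.0, -1.0)
acc = np.mean(np.sign(test_x @ w) == test_y)
print(acc)  # accuracy close to 1.0 on this separable toy stream
```

Because each update touches only one example, the same loop works unchanged when the data arrive as a stream or are sharded across machines, which is the scalability property the paper's surveyed methods target.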


2021 · Vol 2021 · pp. 1-11
Author(s): Yanyang Bai, Xuesheng Zhang

With the rapid development of science, technology, and information technology in the current era, traditional ways of cognition are gradually being replaced. Effective data analysis is of great help to society and thereby drives development toward better outcomes. How can the use of overall information resources be expanded, a mathematical-analysis-oriented evidence theory system model be established, the effective utilization of the machine be improved, and the goal of comprehensively predicting target behavior be achieved? The main goal of this article is to apply machine learning technology: the main prediction model is defined in the Python programming language, data from previous World Cups are analyzed and forecast, and an analysis and prediction model for football matches is established using the K-means and DPC clustering algorithms. Python programming is used to implement the algorithm; data from previous World Cup football matches are selected, and the built model is used for predictive analysis on the Python platform. The calculation method based on the DPC-K-means algorithm determines the accuracy and probability of the variables from the calculation results, yielding results for specific competitions. The research shows that the reliability and accuracy of the prediction results are improved by more than 55%, which demonstrates that the algorithm has a high level of predictive-analysis capability for World Cup football matches.
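The K-means component of such a pipeline is easy to illustrate in Python, the language the article says it uses. The feature vectors below are synthetic stand-ins rather than World Cup match data, and this plain Lloyd's-algorithm implementation is only a sketch of the clustering step, not the article's DPC-K-means method.

```python
import numpy as np

def kmeans(X, k, iters=50, seed=0):
    """Plain K-means: assign each point to its nearest centroid,
    then recompute each centroid as the mean of its points."""
    rng = np.random.default_rng(seed)
    centroids = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(iters):
        # distances from every point to every centroid
        d = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        for j in range(k):
            if np.any(labels == j):
                centroids[j] = X[labels == j].mean(axis=0)
    return labels, centroids

# Two well-separated blobs of hypothetical per-match feature vectors
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 0.5, (50, 2)),
               rng.normal(5, 0.5, (50, 2))])
labels, centroids = kmeans(X, k=2)
```

In a match-prediction setting, the recovered cluster of a new feature vector would then feed whatever downstream probability estimate the model uses.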


Author(s): Manjunath Thimmasandra Narayanapppa, T. P. Puneeth Kumar, Ravindra S. Hegadi

Recent technological advancements have led to the generation of huge volumes of data from distinct domains (scientific sensors, health care, user-generated data, financial companies, the internet, and supply chain systems) over the past decade. The term big data was coined to capture the meaning of this emerging trend. In addition to its huge volume, big data exhibits several unique characteristics compared with traditional data. For instance, big data is generally unstructured and requires more real-time analysis. This development calls for new system platforms for data acquisition, storage, and transmission, and for large-scale data processing mechanisms. In recent years the analytics industry's interest has been expanding toward big data analytics to uncover the potential concealed in big data, such as hidden patterns or unknown correlations. The main goal of this chapter is to explore the importance of machine learning algorithms and the computational environment, including the hardware and software required to perform analytics on big data.


Author(s): Bradford William Hesse

The presence of large-scale data systems can be felt, consciously or not, in almost every facet of modern life, whether through the simple act of selecting travel options online, purchasing products from online retailers, or navigating through the streets of an unfamiliar neighborhood using global positioning system (GPS) mapping. These systems operate through the momentum of big data, a term introduced by data scientists to describe a data-rich environment enabled by a superconvergence of advanced computer-processing speeds and storage capacities; advanced connectivity between people and devices through the Internet; the ubiquity of smart, mobile devices and wireless sensors; and the creation of accelerated data flows among systems in the global economy. Some researchers have suggested that big data represents the so-called fourth paradigm in science, wherein the first paradigm was marked by the evolution of the experimental method, the second was brought about by the maturation of theory, the third was marked by an evolution of statistical methodology as enabled by computational technology, while the fourth extended the benefits of the first three, but also enabled the application of novel machine-learning approaches to an evidence stream that exists in high volume, high velocity, high variety, and differing levels of veracity. In public health and medicine, the emergence of big data capabilities has followed naturally from the expansion of data streams from genome sequencing, protein identification, environmental surveillance, and passive patient sensing. In 2001, the National Committee on Vital and Health Statistics published a road map for connecting these evidence streams to each other through a national health information infrastructure. Since then, the road map has spurred national investments in electronic health records (EHRs) and motivated the integration of public surveillance data into analytic platforms for health situational awareness. 
More recently, the boom in consumer-oriented mobile applications and wireless medical sensing devices has opened up the possibility for mining new data flows directly from altruistic patients. In the broader public communication sphere, the ability to mine the digital traces of conversation on social media presents an opportunity to apply advanced machine learning algorithms as a way of tracking the diffusion of risk communication messages. In addition to utilizing big data for improving the scientific knowledge base in risk communication, there will be a need for health communication scientists and practitioners to work as part of interdisciplinary teams to improve the interfaces to these data for professionals and the public. Too much data, presented in disorganized ways, can lead to what some have referred to as “data smog.” Much work will be needed for understanding how to turn big data into knowledge, and just as important, how to turn data-informed knowledge into action.


Molecules · 2019 · Vol 24 (11) · pp. 2097
Author(s): Ambrose Plante, Derek M. Shore, Giulia Morra, George Khelashvili, Harel Weinstein

G protein-coupled receptors (GPCRs) play a key role in many cellular signaling mechanisms, and must select among multiple coupling possibilities in a ligand-specific manner in order to carry out a myriad of functions in diverse cellular contexts. Much has been learned about the molecular mechanisms of ligand-GPCR complexes from Molecular Dynamics (MD) simulations. However, exploring ligand-specific differences in the response of a GPCR to diverse ligands, as is required to understand ligand bias and functional selectivity, necessitates generating very large amounts of data from large-scale simulations. This becomes a Big Data problem for the high-dimensionality analysis of the accumulated trajectories. Here we describe a new machine learning (ML) approach to the problem that is based on transforming the analysis of GPCR function-related, ligand-specific differences encoded in the MD simulation trajectories into a representation recognizable by state-of-the-art deep learning object recognition technology. We illustrate this method by applying it to recognize the pharmacological classification of ligands bound to the 5-HT2A and D2 subtypes of class-A GPCRs from the serotonin and dopamine families. The ML-based approach is shown to perform the classification task with high accuracy, and we identify the molecular determinants of the classifications in the context of GPCR structure and function. This study builds a framework for the efficient computational analysis of MD Big Data collected for the purpose of understanding ligand-specific GPCR activity.
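The core idea, re-encoding simulation frames as image-like arrays that object-recognition networks can ingest, can be sketched independently of the paper's actual featurization, which the abstract does not specify. Below, each frame of a synthetic trajectory is turned into a pairwise-distance map; every name, shape, and number here is an assumption for illustration only.

```python
import numpy as np

def distance_map(coords):
    """Pairwise-distance matrix for one frame: a 2-D, image-like array
    that a convolutional object-recognition network could consume."""
    diff = coords[:, None, :] - coords[None, :, :]
    return np.linalg.norm(diff, axis=2)

# Hypothetical trajectory: 100 frames of 30 residue positions in 3-D
rng = np.random.default_rng(0)
traj = rng.normal(size=(100, 30, 3))

# Stack of per-frame "images", normalized to [0, 1] before classification
maps = np.stack([distance_map(frame) for frame in traj])
maps /= maps.max()
print(maps.shape)  # (100, 30, 30)
```

A labeled stack of such maps (one label per ligand class) is exactly the kind of input a standard image-classification network expects, which is what makes off-the-shelf object-recognition technology applicable to trajectory data.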

