Pop Music Trend and Image Analysis Based on Big Data Technology

2021 ◽  
Vol 2021 ◽  
pp. 1-12
Author(s):  
Jinyan Ren

With people's growing pursuit of the art of music, many singers have begun analyzing future music trends to guide their creative work. This study first introduces the theory of music trend analysis, big data mining technology, and related algorithms. Then, the autoregressive integrated moving average (ARIMA), random forest, and long short-term memory (LSTM) algorithms are used to establish image analysis and prediction models, analyze the music data, and predict music trends. Tests of the three models show that when a singer's songs are analyzed along three dimensions (collection, download, and playback counts), the LSTM model predicts playback counts well. However, the LSTM model also has some defects; for example, it cannot accurately predict songs whose data fluctuate strongly. At the same time, the gap between the playback counts predicted by the ARIMA image-analysis model and the actual counts is small, staying within the allowable error range. A comprehensive analysis shows that, compared with the ARIMA and random forest algorithms, the LSTM algorithm predicts music trends more accurately. The results will help singers create songs in line with current and future music trends, and will also make traditional music creation more information-driven and modern.
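For illustration, here is a minimal sketch of what such an LSTM playback-count predictor might look like in Keras. The synthetic daily playback series, the 14-day window, and the layer sizes are assumptions for the sketch, not the study's own data or configuration.

```python
# Minimal sketch of an LSTM playback-count predictor (illustrative only).
import numpy as np
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import LSTM, Dense

def make_windows(series, window=14):
    """Slice a 1-D series into (window -> next value) training pairs."""
    X = np.array([series[i:i + window] for i in range(len(series) - window)])
    y = series[window:]
    return X[..., np.newaxis], y  # add feature axis: (samples, window, 1)

# Synthetic daily playback counts stand in for the real dataset.
rng = np.random.default_rng(0)
plays = 1000 + 50 * np.sin(np.arange(365) / 7) + rng.normal(0, 20, 365)

X, y = make_windows(plays)
model = Sequential([
    LSTM(32, input_shape=(X.shape[1], 1)),
    Dense(1),  # next-day playback count
])
model.compile(optimizer="adam", loss="mse")
model.fit(X, y, epochs=10, batch_size=32, verbose=0)

next_day = model.predict(X[-1:], verbose=0)
print(f"Predicted next-day playbacks: {next_day[0, 0]:.0f}")
```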

Author(s):  
Saranya N. ◽  
Saravana Selvam

After an era of struggling with data collection, the challenge today has become how to process these vast amounts of information. Scientists and researchers consider Big Data to be one of the most important topics in computing science. Big Data describes huge volumes of data that can exist in any structure, which makes it difficult for standard approaches to mine useful information from such large data sets. Classification in Big Data is a procedure for summarizing data sets according to various patterns, and a number of classification frameworks exist to support it. The methods discussed in the chapter include Multi-Layer Perceptron, Linear Regression, C4.5, CART, J48, SVM, ID3, Random Forest, and KNN. The goal of this chapter is to provide a comprehensive evaluation of the classification methods in common use.
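As a concrete illustration, the sketch below compares a few of the classifiers named above using scikit-learn; the Iris data set and default hyperparameters are stand-ins for the sketch, not the chapter's own experimental setup.

```python
# Hedged illustration: cross-validated comparison of several classifiers.
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

X, y = load_iris(return_X_y=True)
models = {
    "CART": DecisionTreeClassifier(),
    "SVM": SVC(),
    "Random Forest": RandomForestClassifier(n_estimators=100),
    "KNN": KNeighborsClassifier(n_neighbors=5),
}
for name, model in models.items():
    scores = cross_val_score(model, X, y, cv=5)  # 5-fold cross-validation
    print(f"{name:13s} mean accuracy: {scores.mean():.3f}")
```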


Complexity ◽  
2020 ◽  
Vol 2020 ◽  
pp. 1-11
Author(s):  
Yangfan Tong ◽  
Wei Sun

This article focuses on the multidimensional construction of a supervision mechanism for multimedia network public opinion, set against the background of the big data era. Building on an analysis and definition of the difference between online public sentiment and online public opinion, it summarizes the new features of network public opinion in the era of big data and analyzes the opportunities and challenges it faces. Based on a rationally constructed index system, the paper studies evaluation and prediction algorithms for multimedia network public opinion. Existing algorithms fall short in capturing the characteristics and long-term dependencies of data sequences, and overfitting and vanishing gradients may occur during training. To address these problems, a regularized prediction model for multimedia network public opinion is constructed on top of the long short-term memory (LSTM) network model. The paper then builds and analyzes a threat-rating evaluation model for multimedia network public opinion based on this supervision and prediction model. The model constructed here not only improves the accuracy of public opinion assessment and prediction but also better avoids the problems of vanishing gradients and overfitting.
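A minimal sketch of the regularized-LSTM idea follows, assuming dropout and an L2 weight penalty as the regularizers and a made-up feature layout (7-day windows of three public-opinion indicators); the article does not specify these details.

```python
# Hedged sketch: LSTM with L2 and dropout regularization to curb overfitting.
import numpy as np
from tensorflow.keras import Sequential, regularizers
from tensorflow.keras.layers import LSTM, Dropout, Dense

window, n_features = 7, 3  # e.g. post volume, sentiment score, share count
model = Sequential([
    LSTM(64, input_shape=(window, n_features),
         kernel_regularizer=regularizers.l2(1e-4),  # L2 penalty on weights
         recurrent_dropout=0.2),                    # dropout on recurrent links
    Dropout(0.3),
    Dense(1, activation="sigmoid"),  # predicted opinion-threat level in [0, 1]
])
model.compile(optimizer="adam", loss="binary_crossentropy")

# Synthetic stand-in data: 500 windows of public-opinion indicators.
rng = np.random.default_rng(1)
X = rng.normal(size=(500, window, n_features))
y = (X[:, -1, 1] > 0).astype("float32")  # toy label from last-day sentiment
model.fit(X, y, epochs=5, batch_size=32, verbose=0)
```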


2016 ◽  
Vol 39 ◽  
Author(s):  
Mary C. Potter

Rapid serial visual presentation (RSVP) of words or pictured scenes provides evidence for a large-capacity conceptual short-term memory (CSTM) that momentarily provides rich associated material from long-term memory, permitting rapid chunking (Potter 1993; 2009; 2012). In the perception of scenes, as in language comprehension, we make use of knowledge that briefly exceeds the supposed limits of working memory.


2020 ◽  
Vol 29 (4) ◽  
pp. 710-727
Author(s):  
Beula M. Magimairaj ◽  
Naveen K. Nagaraj ◽  
Alexander V. Sergeev ◽  
Natalie J. Benafield

Objectives: School-age children with and without parent-reported listening difficulties (LiD) were compared on auditory processing, language, memory, and attention abilities. The objective was to extend what is known so far in the literature about children with LiD by using multiple measures, including selected novel measures, across the above areas. Design: Twenty-six children reported by their parents as having LiD and 26 age-matched typically developing children completed clinical tests of auditory processing and multiple measures of language, attention, and memory. All children had normal-range pure-tone hearing thresholds bilaterally. Group differences were examined. Results: In addition to significantly poorer speech-perception-in-noise scores, children with LiD had reduced speed and accuracy of word retrieval from long-term memory, and poorer short-term memory, sentence recall, and inferencing ability. Statistically significant group differences were of moderate effect size; however, the standard test scores of children with LiD were not clinically poor. No statistically significant group differences were observed in attention, working memory capacity, vocabulary, or nonverbal IQ. Conclusions: Mild signal-to-noise ratio loss, as reflected by the group mean of children with LiD, supported the children's functional listening problems. In addition, the children's relative weakness in select areas of language performance, short-term memory, and long-term memory lexical retrieval speed and accuracy adds to previous research on evidence-based areas that need to be evaluated in children with LiD, who almost always have heterogeneous profiles. Importantly, the functional difficulties faced by children with LiD in relation to their test results indicate, to some extent, that commonly used assessments may not adequately capture these children's listening challenges. Supplemental Material: https://doi.org/10.23641/asha.12808607


2010 ◽  
Vol 24 (4) ◽  
pp. 249-252 ◽  
Author(s):  
Márk Molnár ◽  
Roland Boha ◽  
Balázs Czigler ◽  
Zsófia Anna Gaál

This review surveys relevant and recent data from the pertinent literature regarding the acute effect of alcohol on various kinds of memory processes, with special emphasis on working memory. The characteristics of different types of long-term memory (LTM) and short-term memory (STM) processes are summarized, with an attempt to relate these to various structures in the brain. LTM is typically impaired by chronic alcohol intake, but according to some data a single dose of ethanol may have long-lasting effects if administered at a critically important age. The most commonly seen deleterious acute effect of alcohol on STM appears following large doses of ethanol under conditions of “binge drinking”, causing the “blackout” phenomenon. However, with the application of various techniques and well-structured behavioral paradigms, it is possible to detect, albeit occasionally, subtle changes in cognitive processes even as a result of a low dose of alcohol. These data may be important when considering the legal consequences of low-dose ethanol intake in conditions such as driving.


Author(s):  
Vivek Raich ◽  
Pankaj Maurya

In the era of information technology, big data stores are growing continuously. As a result, huge amounts of data are available to decision makers, which has driven the progress of information technology and its wide adoption in many areas of business, engineering, medicine, and scientific study. Big data refers not only to data sets of very large size but also to data of several types that are hard to handle with conventional methods, so dedicated technology is required to manage it. Given this continuous growth, it is important to study and manage these data sets in line with requirements so that the necessary information can be extracted. The aim of this paper is to analyze some of the analytic methods and tools that can be applied to large data. In addition, it analyzes applications of big data, showing how decision makers can work on big data and use the resulting insights in different applications.
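As one concrete example of an analytic tool in this space, the sketch below shows a distributed aggregation with Apache Spark; the file path and column names are hypothetical placeholders, not drawn from the paper.

```python
# Hedged sketch: aggregating a large dataset with Apache Spark (PySpark).
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("big-data-analytics").getOrCreate()

# Spark reads and aggregates the data in parallel across the cluster,
# so the full dataset never has to fit in one machine's memory.
# The path and columns below are hypothetical.
orders = spark.read.csv("hdfs://data/orders.csv", header=True, inferSchema=True)
summary = (orders
           .groupBy("region")
           .agg(F.sum("amount").alias("total_sales"),
                F.count("*").alias("n_orders")))
summary.show()
```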


Author(s):  
. Monika ◽  
Pardeep Kumar ◽  
Sanjay Tyagi

In a cloud computing environment, QoS (Quality of Service) and cost are the key elements to be taken care of. In today's era of big data, requests must be handled properly as data volumes grow; when serving requests involving large data or scientific applications, the flow of information must be sustained. This paper gives a brief introduction to workflow scheduling and presents a detailed survey of various scheduling algorithms, compared across several parameters.
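For illustration, here is a minimal sketch of one classic heuristic that such surveys typically cover: min-min scheduling, which repeatedly assigns the task with the smallest earliest completion time. The task lengths and VM speeds are invented placeholders.

```python
# Hedged sketch of the min-min scheduling heuristic for independent tasks.
def min_min_schedule(task_lengths, vm_speeds):
    """Greedy min-min mapping of independent tasks onto VMs.

    task_lengths: work units per task; vm_speeds: units/second per VM.
    Returns (task -> VM assignment, per-VM finish times).
    """
    ready_time = [0.0] * len(vm_speeds)
    assignment = {}
    unscheduled = set(range(len(task_lengths)))
    while unscheduled:
        # For every (task, VM) pair compute the completion time, then pick
        # the pair with the globally smallest earliest completion time.
        task, vm, finish = min(
            ((t, v, ready_time[v] + task_lengths[t] / vm_speeds[v])
             for t in unscheduled for v in range(len(vm_speeds))),
            key=lambda x: x[2],
        )
        assignment[task] = vm
        ready_time[vm] = finish
        unscheduled.remove(task)
    return assignment, ready_time

mapping, finish = min_min_schedule([40, 10, 25, 60], [2.0, 1.0])
print(mapping, [round(f, 1) for f in finish])
```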


2018 ◽  
Vol 1 (1) ◽  
pp. 128-133
Author(s):  
Selvi Selvi

Economic globalization between countries has become commonplace. Differences in financial rules are exploited by many parties to practice Base Erosion and Profit Shifting (BEPS), which leads to losses for states. To tackle this, countries have agreed to implement the Automatic Exchange of Information (AEoI), which automatically turns exchanged financial information into big data in the field of taxation. The research method of this paper is a literature study that combines several related works on the global and national implications, using secondary data. The paper concludes that, in theory, Indonesia as a developing country has overcome the challenges of AEoI; in practice, however, it cannot yet be determined whether they can be overcome, because Indonesia has still not implemented AEoI.


2020 ◽  
Vol 30 (Supplement_5) ◽  

Countries have a wide range of lifestyles, environmental exposures, and different health(care) systems, providing a large natural experiment to be investigated. Through pan-European comparative studies, the underlying determinants of population health can be explored, yielding rich new insights into the dynamics of population health and care, such as the safety, quality, effectiveness, and costs of interventions. Additionally, in the big data era, secondary use of data has become one of the major cornerstones of digital transformation for health systems improvement, and several countries are reviewing governance models and regulatory frameworks for data reuse. Precision medicine and public health intelligence share the same population-based approach; as such, aligning secondary-use-of-data initiatives will increase the cost-efficiency of the data conversion value chain by ensuring that different stakeholders' needs are accounted for from the beginning.

At EU level, the European Commission has been raising awareness of the need to create adequate data ecosystems for innovative use of big data for health, especially ensuring responsible development and deployment of data science and artificial intelligence technologies in the medical and public health sectors. To this end, the Joint Action on Health Information (InfAct) is setting up the Distributed Infrastructure on Population Health (DIPoH). DIPoH provides a framework for international and multi-sectoral collaborations in health information. More specifically, DIPoH facilitates the sharing of research methods, data, and results through the participation of countries and already existing research networks. DIPoH's efforts include harmonization and interoperability, strengthening research capacity in Member States (MSs), and providing European and worldwide perspectives on national data. To be embedded in the health information landscape, DIPoH aims to interact with existing (inter)national initiatives to identify common interfaces, avoid duplication of work, and establish a sustainable long-term health information research infrastructure.

In this workshop, InfAct lays down DIPoH's core elements in coherence with national and European initiatives and actors, i.e., To-Reach, eHAction, the French Health Data Hub, and ECHO. Pitch presentations on DIPoH and its national nodes will set the scene. In the format of a round table, possible collaborations with existing initiatives at (inter)national level will be debated with the audience. Synergies will be sought, reflections on community needs will be made, and expectations of services will be discussed. The workshop will increase delegates' knowledge of the latest health information infrastructures and initiatives that strive for better public health and health systems in countries. The workshop also serves as a capacity-building activity to promote cooperation between initiatives and actors in the field.

Key messages: DIPoH is an infrastructure aiming to interact with existing (inter)national initiatives to identify common interfaces, avoid duplication, and enable a long-term health information research infrastructure. National nodes can improve coordination, communication, and cooperation between health information stakeholders in a country, potentially reducing overlap and duplication of research and fieldwork.

