Electronic Records Management - An Old Solution to a New Problem

Big Data ◽  
2016 ◽  
pp. 2249-2274
Author(s):  
Chinh Nguyen ◽  
Rosemary Stockdale ◽  
Helana Scheepers ◽  
Jason Sargent

The rapid development of technology and the interactive nature of Government 2.0 (Gov 2.0) are generating large data sets for government, resulting in a struggle to control, manage, and extract the right information. Research into these large data sets (termed Big Data) has therefore become necessary. Governments are now spending significant sums on storing and processing vast amounts of information because of the huge proliferation and complexity of Big Data and a lack of effective records management. Electronic Records Management (ERM), however, is an established method for controlling and governing the important data of an organisation. This paper investigates the challenges identified in the literature on Gov 2.0, Big Data, and ERM in order to develop a better understanding of how ERM can be applied to Big Data to extract usable information in the context of Gov 2.0. The paper suggests that ERM, with its well-established governance policies, could be a key building block in providing usable information to stakeholders. A framework is constructed to illustrate how ERM can play a role in the context of Gov 2.0. Future research is necessary to address the specific constraints and expectations placed on governments in terms of data retention and use.

2014 ◽  
Vol 10 (4) ◽  
pp. 94-116 ◽  
Author(s):  
Chinh Nguyen ◽  
Rosemary Stockdale ◽  
Helana Scheepers ◽  
Jason Sargent


2020 ◽  
Vol 10 (2) ◽  
pp. 7-10
Author(s):  
Deepti Pandey

This article provides insight into an emerging research discipline called Psychoinformatics. In the context of Psychoinformatics, we emphasize the cooperation between the disciplines of psychology and information science in handling large data sets derived from heavily used devices, such as smartphones, and from online social networks, in order to highlight psychological qualities, including both personality and mood. New challenges await psychologists in light of the resulting "Big Data" sets, because classic psychological methods will only partly be able to analyze data derived from ubiquitous mobile devices and other everyday technologies. Consequently, psychologists must enrich their scientific methods through the inclusion of methods from informatics. Furthermore, we emphasize how data derived from Psychoinformatics can be combined in a meaningful way with data from human neuroscience. We close the article with some observations on areas for future research and problems that require consideration within this new discipline.


2014 ◽  
Vol 24 (2) ◽  
pp. 122-141 ◽  
Author(s):  
Victoria Louise Lemieux ◽  
Brianna Gormly ◽  
Lyse Rowledge

Purpose – This paper aims to explore the role of records management in supporting the effective use of information visualisation and visual analytics (VA) to meet the challenges associated with the analysis of Big Data.
Design/methodology/approach – This exploratory research entailed conducting and analysing interviews with a convenience sample of visual analysts and VA tool developers, affiliated with a major VA institute, to gain a deeper understanding of data-related issues that constrain or prevent effective visual analysis of large data sets or the use of VA tools, and analysing key emergent themes related to data challenges to map them to records management controls that may be used to address them.
Findings – The authors identify key data-related issues that constrain or prevent effective visual analysis of large data sets or the use of VA tools, and identify records management controls that may be used to address these data-related issues.
Originality/value – This paper discusses a relatively new field, VA, which has emerged in response to meeting the challenge of analysing big, open data. It contributes a small exploratory research study aimed at helping records professionals understand the data challenges faced by visual analysts and, by extension, data scientists for the analysis of large and heterogeneous data sets. It further aims to help records professionals identify how records management controls may be used to address data issues in the context of VA.


2021 ◽  
pp. 1-6
Author(s):  
Nicholas M. Watanabe ◽  
Stephen Shapiro ◽  
Joris Drayer

Big data and analytics have become an essential component of organizational operations. The ability to collect and interpret significantly large data sets has provided a wealth of knowledge to guide decision makers in all facets of society. This is no different in sport management, where big data has been used on and off the field to guide decision making across the industry. As big data evolves, there are concerns regarding the use of enhanced analytic techniques and their advancement of knowledge and theory. This special issue addresses these concerns by advancing our understanding of the use of big data in sport management research and how it can be used to further scholarship in the sport industry. The six articles in this special issue each play a role in advancing sport analytics theory, producing new knowledge, and developing new inquiries. The implications discussed in these articles provide a foundation for future research on this evolving area within the field of sport management.


Author(s):  
Saranya N. ◽  
Saravana Selvam

After an era of struggling with data collection, the issue today has become how to process these vast amounts of information. Scientists and researchers consider Big Data to be among the most important topics in computing science. Big Data describes huge volumes of data that may exist in any structure, which makes it difficult for standard approaches to mine the appropriate data from such large data sets. Classification in Big Data is a procedure of summarizing data sets based on various patterns, and there are distinct classification frameworks that help us classify data collections. Among the methods discussed in the chapter are Multi-Layer Perceptron, Linear Regression, C4.5, CART, J48, SVM, ID3, Random Forest, and KNN. The objective of this chapter is to provide a comprehensive evaluation of classification methods that are currently in common use.
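
To make one of the surveyed methods concrete, the following is a minimal sketch of k-nearest neighbours (KNN), one of the classifiers the chapter lists. It is plain Python for illustration only; the example data and the function name `knn_predict` are illustrative, and a real Big Data workload would use a distributed library rather than an in-memory loop.

```python
import math
from collections import Counter

def knn_predict(train, query, k=3):
    """train: list of (features, label) pairs. Returns the majority
    label among the k training points closest to query, measured by
    Euclidean distance."""
    nearest = sorted(train, key=lambda t: math.dist(t[0], query))[:k]
    votes = Counter(label for _, label in nearest)
    return votes.most_common(1)[0][0]

# Two small, well-separated classes.
train = [((1.0, 1.0), "a"), ((1.2, 0.8), "a"),
         ((4.0, 4.0), "b"), ((4.1, 3.9), "b"), ((3.8, 4.2), "b")]
print(knn_predict(train, (1.1, 0.9)))  # "a"
print(knn_predict(train, (4.0, 4.1)))  # "b"
```

The lazy, instance-based nature of KNN is precisely what makes it awkward at Big Data scale: every prediction scans the full training set, which motivates the chapter's comparison with eager learners such as C4.5 or Random Forest.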


Author(s):  
B. K. Tripathy ◽  
Hari Seetha ◽  
M. N. Murty

Data clustering plays a very important role in data mining, machine learning, and image processing. As modern databases have inherent uncertainties, many uncertainty-based clustering algorithms have been developed in this direction, including fuzzy c-means, rough c-means, intuitionistic fuzzy c-means, and algorithms based on hybrid models such as rough fuzzy c-means and rough intuitionistic fuzzy c-means. There are also many variants that improve these algorithms in different directions, such as their kernelised, possibilistic, and possibilistic kernelised versions. However, none of the above algorithms is effective on big data, for various reasons, so researchers have been trying for the past few years to improve them so that they can be applied to cluster big data. Such algorithms remain relatively few in comparison to those for data sets of reasonable size. Our aim in this chapter is to present the uncertainty-based clustering algorithms developed so far and to propose a few new algorithms that can be developed further.
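
As a reference point for the family of algorithms above, here is a sketch of the two alternating updates in standard fuzzy c-means (the Bezdek formulation with fuzzifier m > 1). The function names and toy data are illustrative, not taken from the chapter; the rough and intuitionistic variants it surveys modify these same membership and centroid steps.

```python
import math

def update_memberships(points, centroids, m=2.0):
    """u[k][i] = degree of membership of point k in cluster i;
    each row sums to 1 by construction."""
    u = []
    for p in points:
        d = [max(math.dist(p, c), 1e-12) for c in centroids]  # avoid /0
        row = []
        for i in range(len(centroids)):
            s = sum((d[i] / d[j]) ** (2.0 / (m - 1.0))
                    for j in range(len(centroids)))
            row.append(1.0 / s)
        u.append(row)
    return u

def update_centroids(points, u, m=2.0):
    """c_i = mean of all points weighted by u[k][i] ** m."""
    k, dim = len(u[0]), len(points[0])
    cs = []
    for i in range(k):
        w = [row[i] ** m for row in u]
        total = sum(w)
        cs.append(tuple(sum(wk * p[d] for wk, p in zip(w, points)) / total
                        for d in range(dim)))
    return cs

points = [(0.0, 0.0), (1.0, 0.0), (9.0, 0.0), (10.0, 0.0)]
centroids = [(0.5, 0.0), (9.5, 0.0)]
for _ in range(10):
    u = update_memberships(points, centroids)
    centroids = update_centroids(points, u)
```

Because every point carries a graded membership in every cluster, each iteration touches the full data set, which is one reason these algorithms need rework before they scale to big data.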


2016 ◽  
pp. 1220-1243
Author(s):  
Ilias K. Savvas ◽  
Georgia N. Sofianidou ◽  
M-Tahar Kechadi

Big data refers to data sets whose size is beyond the capabilities of most current hardware and software technologies. The Apache Hadoop software library is a framework for distributed processing of large data sets; HDFS is a distributed file system that provides high-throughput access to data-driven applications, and MapReduce is a software framework for distributed computing over large data sets. Huge collections of raw data require fast and accurate mining processes in order to extract useful knowledge. One of the most popular techniques of data mining is the K-means clustering algorithm. In this study, the authors develop a distributed version of the K-means algorithm using the MapReduce framework on the Hadoop Distributed File System. The theoretical and experimental results of the technique prove its efficiency; thus, HDFS and MapReduce can be applied to big data with very promising results.
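
The split the authors exploit can be sketched on a single machine: the map phase tags each point with its nearest centroid, and the reduce phase averages each group. This is an illustrative Python analogue, not the authors' Hadoop code; the names `map_phase` and `reduce_phase` are assumptions, and a real job would run them as distributed mapper and reducer tasks over HDFS blocks.

```python
import math
from collections import defaultdict

def map_phase(points, centroids):
    """Map: emit (nearest-centroid-index, point) pairs."""
    pairs = []
    for p in points:
        idx = min(range(len(centroids)),
                  key=lambda i: math.dist(p, centroids[i]))
        pairs.append((idx, p))
    return pairs

def reduce_phase(pairs, k):
    """Reduce: new centroid i = mean of the points assigned to i.
    Assumes every centroid received at least one point."""
    groups = defaultdict(list)
    for idx, p in pairs:
        groups[idx].append(p)
    new_centroids = []
    for i in range(k):
        pts = groups[i]
        dim = len(pts[0])
        new_centroids.append(tuple(sum(p[d] for p in pts) / len(pts)
                                   for d in range(dim)))
    return new_centroids

# One K-means iteration over two obvious clusters.
points = [(0.0, 0.0), (0.2, 0.1), (5.0, 5.0), (5.1, 4.9)]
centroids = [(0.0, 0.0), (5.0, 5.0)]
centroids = reduce_phase(map_phase(points, centroids), k=2)
print(centroids)  # one mean per dense region
```

The assignment step is embarrassingly parallel, which is why K-means maps so naturally onto the MapReduce model the study builds on; iterating until the centroids stop moving completes the algorithm.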


2016 ◽  
Vol 2016 ◽  
pp. 1-10 ◽  
Author(s):  
Christian Montag ◽  
Éilish Duke ◽  
Alexander Markowetz

The present paper provides insight into an emerging research discipline called Psychoinformatics. In the context of Psychoinformatics, we emphasize the cooperation between the disciplines of psychology and computer science in handling large data sets derived from heavily used devices, such as smartphones or online social network sites, in order to shed light on a large number of psychological traits, including personality and mood. New challenges await psychologists in light of the resulting “Big Data” sets, because classic psychological methods will only in part be able to analyze this data derived from ubiquitous mobile devices, as well as other everyday technologies. As a consequence, psychologists must enrich their scientific methods through the inclusion of methods from informatics. The paper provides a brief review of one area of this research field, dealing mainly with social networks and smartphones. Moreover, we highlight how data derived from Psychoinformatics can be combined in a meaningful way with data from human neuroscience. We close the paper with some observations of areas for future research and problems that require consideration within this new discipline.


2017 ◽  
Vol 7 (1.1) ◽  
pp. 237
Author(s):  
MD. A R Quadri ◽  
B. Sruthi ◽  
A. D. SriRam ◽  
B. Lavanya

Java is one of the finest languages for big data because of its write-once, run-anywhere nature. The release of Java 8 introduced features such as lambda expressions and streams, which are helpful for parallel computing. Though these features help in extracting, sorting, and filtering data from collections and arrays, problems remain: streams cannot properly process very large data sets such as big data, and there are difficulties when executing in a distributed environment. The streams introduced in Java are restricted to computations within a single system, with no mechanism for distributed computing over multiple systems, and streams hold their data in memory and therefore cannot support huge data sets. This paper addresses how Java 8 can cope with massive data and operate in a distributed environment by providing extensions to the programming model with distributed streams. The distributed computing of large data sets may thus be accomplished by introducing distributed stream frameworks.

