Big Data and Privatisation of Registers – Recent Developments and Thoughts from a Torrens Perspective

2018 ◽  
Author(s):  
Rod Thomas ◽  
Lynden Griggs ◽  
Rouhshi Low
Keyword(s):  
Big Data ◽


2015 ◽
Vol 32 (03) ◽  
pp. 1550019 ◽  
Author(s):  
Jie Xu ◽  
Edward Huang ◽  
Chun-Hung Chen ◽  
Loo Hay Lee

Recent advances in simulation optimization research and explosive growth in computing power have made it possible to optimize complex stochastic systems that are otherwise intractable. In the first part of this paper, we classify simulation optimization techniques into four categories based on how the search is conducted. We provide tutorial expositions on representative methods from each category, with a focus on recent developments, and compare the strengths and limitations of each category. In the second part of this paper, we review applications of simulation optimization in various contexts, with detailed discussions on health care, logistics, and manufacturing systems. Finally, we explore the potential of simulation optimization in the new era. Specifically, we discuss how simulation optimization can benefit from cloud computing and high-performance computing, its integration with big data analytics, and the value of simulation optimization in helping to address challenges in the engineering design of complex systems.
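To make the search-based view above concrete, here is a minimal Python sketch of the simplest kind of simulation optimizer: random search over a noisy objective, with replications averaged to tame the noise. The objective function, sample budget, and search interval are hypothetical stand-ins for a real stochastic simulation, not a method from the paper.

```python
import random

def simulate(x, noise=1.0):
    """Hypothetical stochastic simulation: a noisy evaluation of (x - 3)^2."""
    return (x - 3.0) ** 2 + random.gauss(0.0, noise)

def random_search(budget=500, replications=10, low=-10.0, high=10.0):
    """Sample candidates uniformly at random; average replications per candidate."""
    best_x, best_est = None, float("inf")
    for _ in range(budget):
        x = random.uniform(low, high)
        est = sum(simulate(x) for _ in range(replications)) / replications
        if est < best_est:
            best_x, best_est = x, est
    return best_x, best_est

if __name__ == "__main__":
    x_star, f_star = random_search()
    print(f"best x ~ {x_star:.3f}, estimated objective ~ {f_star:.3f}")
```

More sophisticated categories surveyed in the paper (e.g., gradient-based or metamodel-based methods) replace the blind sampling loop with guided search, but the simulate-then-compare structure stays the same.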


Nowadays, digital technologies and information systems (e.g., cloud computing and the Internet of Things) generate vast volumes of data, from which end users extract knowledge to make better decisions. However, analyzing these massive data for decision making demands substantial research effort at multiple levels. Researchers have therefore concentrated on Big Data Analysis (BDA), but traditional databases, data techniques and platforms suffer from limited storage, imbalanced data, poor scalability, insufficient accuracy and slow responsiveness, which leads to low efficiency in the Big Data (BD) context. The main objective of this research is therefore to present a generalized view of a complete BD system, consisting of its various stages and the major components of each stage for processing BD. In particular, the data management stage covers NoSQL databases and different Parallel Distributed File Systems (PDFS); the analysis of BD challenges alongside recent developments then provides a better understanding of how different tools and technologies are applied to solve real-life applications.
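As a rough illustration of the parallel processing pattern underlying the platforms such a BD system builds on, the following Python sketch runs a MapReduce-style word count over hypothetical data partitions. The partitions stand in for blocks of a distributed file system; this is a toy, single-machine analogue, not any specific platform from the paper.

```python
from collections import Counter
from multiprocessing import Pool

def map_count(chunk):
    """Map step: count words within one partition of the data."""
    return Counter(chunk.split())

def reduce_counts(partials):
    """Reduce step: merge per-partition counts into a global result."""
    total = Counter()
    for c in partials:
        total.update(c)
    return total

if __name__ == "__main__":
    # Hypothetical partitions, standing in for blocks of a distributed file system.
    chunks = ["big data needs parallel tools", "parallel tools scale big data"]
    with Pool(2) as pool:
        partials = pool.map(map_count, chunks)
    print(reduce_counts(partials).most_common(3))
```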


Author(s):  
Dan Stowell

Terrestrial bioacoustics, like many other domains, has recently witnessed some transformative results from the application of deep learning and big data (Stowell 2017, Mac Aodha et al. 2018, Fairbrass et al. 2018, Mercado III and Sturdy 2017). Generalising over specific projects, which bioacoustic tasks can we consider "solved"? What can we expect in the near future, and what remains hard to do? What does a bioacoustician need to understand about deep learning? This contribution will address these questions, giving the audience a concise summary of recent developments and ways forward. It builds on recent projects and evaluation campaigns led by the author (Stowell et al. 2015, Stowell et al. 2018), as well as broader developments in signal processing, machine learning and bioacoustic applications of these. We will discuss which types of deep learning networks are appropriate for audio data, how to address zoological/ecological applications that often have few available data, and issues in integrating deep learning predictions with existing workflows in statistical ecology.
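For readers wondering what a deep learning network for audio data looks like in practice, the sketch below defines a small convolutional network over spectrogram-shaped tensors in PyTorch. The architecture, class count, and input shape are illustrative assumptions, not the networks used in the cited projects.

```python
import torch
import torch.nn as nn

class SpectrogramCNN(nn.Module):
    """Small CNN over (batch, 1, mel_bins, time_frames) spectrogram tensors."""
    def __init__(self, n_classes=10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        # Global average pooling keeps the head independent of clip length.
        self.head = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, n_classes)
        )

    def forward(self, x):
        return self.head(self.features(x))

if __name__ == "__main__":
    model = SpectrogramCNN(n_classes=5)
    dummy = torch.randn(4, 1, 64, 128)  # 4 clips, 64 mel bins, 128 frames
    print(model(dummy).shape)  # torch.Size([4, 5])
```

In few-data regimes, the common moves are transfer learning from a pretrained audio network and heavy data augmentation rather than a larger architecture.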


Author(s):  
Jonas Andersson Schwarz

Digital media infrastructures give rise to texts that are socially interconnected in various forms of complex networks. These mediated phenomena can be analyzed through methods that trace relational data. Social network analysis (SNA) traces interconnections between social nodes, while natural language processing (NLP) traces intralinguistic properties of the text. These methods can be bracketed under the heading "social big data." Empirical and theoretical rigor demands a constructionist understanding of such data. Analysis is inherently perspective-bound; it is rarely a purely objective statistical exercise. Some kind of selection is always made, primarily out of practical necessity. Moreover, the agents observed (network participants producing the texts in question) all tend to make their own encodings, based on observational inferences situated in the network topology. Recent developments in such methods have, for example, provided social-science scholars with innovative means to address inconsistencies in comparative surveys in different languages, addressing issues of comparability and measurement equivalence. NLP provides novel, inductive ways of understanding word meanings as a function of their relational placement in syntagmatic and paradigmatic relations, thereby identifying biases in the relative meanings of words. Reflecting on current research projects, the chapter addresses key epistemological challenges in order to improve contextual understanding.
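To make the idea of relational word meaning concrete, the following sketch probes relative meanings with cosine similarity over toy embedding vectors. The vectors and the bias probe are invented for illustration; a real analysis would use embeddings trained on a corpus (e.g., with word2vec).

```python
import numpy as np

def cosine(u, v):
    """Cosine similarity: relational closeness of two embedding vectors."""
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

# Toy 4-dimensional embeddings, invented purely for illustration.
emb = {
    "doctor": np.array([0.9, 0.1, 0.3, 0.2]),
    "nurse":  np.array([0.8, 0.2, 0.1, 0.6]),
    "he":     np.array([0.4, 0.0, 0.9, 0.1]),
    "she":    np.array([0.4, 0.1, 0.1, 0.9]),
}

# A crude bias probe: does "doctor" sit closer to "he" than to "she"?
print(cosine(emb["doctor"], emb["he"]) - cosine(emb["doctor"], emb["she"]))
print(cosine(emb["nurse"], emb["he"]) - cosine(emb["nurse"], emb["she"]))
```

The point of the probe is exactly the chapter's: word meaning here is nothing but relative position in a learned space, so any bias in the corpus is inherited by the geometry.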


2019 ◽  
Vol 54 (1) ◽  
pp. 99-136
Author(s):  
Syed Wasim Abbas ◽  
Munir Ahmad ◽  
Sajid Rasul

2020 ◽  
Vol 7 (1) ◽  
Author(s):  
E. A. Huerta ◽  
Asad Khan ◽  
Edward Davis ◽  
Colleen Bushell ◽  
William D. Gropp ◽  
...  

Abstract Significant investments to upgrade and construct large-scale scientific facilities demand commensurate investments in R&D to design algorithms and computing approaches that enable scientific and engineering breakthroughs in the big data era. Innovative Artificial Intelligence (AI) applications have powered transformational solutions for big data challenges in industry and technology that now drive a multi-billion-dollar industry, and that play an ever-increasing role in shaping human social patterns. As AI continues to evolve into a computing paradigm endowed with statistical and mathematical rigor, it has become apparent that single-GPU solutions for training, validation, and testing are no longer sufficient for the computational grand challenges brought about by scientific facilities that produce data at a rate and volume that outstrip the computing capabilities of available cyberinfrastructure platforms. This realization has been driving the confluence of AI and high performance computing (HPC) to reduce time-to-insight, and to enable a systematic study of domain-inspired AI architectures and optimization schemes for data-driven discovery. In this article we present a summary of recent developments in this field, and describe specific advances that the authors are spearheading to accelerate and streamline the use of HPC platforms to design and apply accelerated AI algorithms in academia and industry.
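As one concrete face of the AI–HPC confluence, the sketch below shows data-parallel training with PyTorch's DistributedDataParallel, in which gradients are all-reduced across workers during the backward pass. The model, data, and gloo backend are placeholder assumptions; the script presumes a launcher such as torchrun, which sets the process-group environment variables for each worker.

```python
import torch
import torch.distributed as dist
import torch.nn as nn
from torch.nn.parallel import DistributedDataParallel as DDP

def main():
    # Launch via `torchrun --nproc_per_node=N this_script.py`;
    # torchrun sets RANK, LOCAL_RANK, and WORLD_SIZE for each worker.
    dist.init_process_group(backend="gloo")  # use "nccl" on GPU clusters
    rank = dist.get_rank()

    model = DDP(nn.Linear(10, 1))  # toy model; gradients sync across workers
    opt = torch.optim.SGD(model.parameters(), lr=0.1)
    loss_fn = nn.MSELoss()

    for _ in range(100):
        x, y = torch.randn(32, 10), torch.randn(32, 1)  # placeholder batch
        opt.zero_grad()
        loss = loss_fn(model(x), y)
        loss.backward()  # all-reduce of gradients happens here
        opt.step()

    if rank == 0:
        print("final loss:", loss.item())
    dist.destroy_process_group()

if __name__ == "__main__":
    main()
```

Scaling this pattern from a handful of GPUs to an HPC cluster is precisely where the time-to-insight gains described above come from.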


Author(s):  
Ramjee Prasad ◽  
Purva Choudhary

Artificial Intelligence (AI) as a technology has existed for less than a century. In spite of this, it has made great strides. The rapid progress in this field has aroused the curiosity of many technologists around the globe, and many companies across various domains are keen to explore its potential. For a field that has achieved so much in such a short duration, it is imperative that people who aim to work in Artificial Intelligence study its origins, recent developments, and future possibilities of expansion to gain better insight into the field. This paper encapsulates the notable progress made in Artificial Intelligence, from its conceptualization to its current state and future possibilities, across various fields. It covers concepts such as the Turing machine, the Turing test, historical developments in Artificial Intelligence, expert systems, big data, robotics, current developments in Artificial Intelligence across various fields, and future possibilities of exploration.
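As a small aside for readers new to the field, the following Python sketch simulates the Turing machine concept mentioned above. The transition rules shown (a machine that flips bits until it reads a blank) are invented purely for illustration.

```python
def run_turing_machine(tape, rules, state="start", head=0, accept="halt", max_steps=1000):
    """Minimal one-tape Turing machine. `rules` maps (state, symbol) to
    (new_state, write_symbol, move), where move is -1, 0, or +1."""
    tape = dict(enumerate(tape))
    for _ in range(max_steps):
        if state == accept:
            break
        symbol = tape.get(head, "_")  # "_" is the blank symbol
        state, write, move = rules[(state, symbol)]
        tape[head] = write
        head += move
    return "".join(tape[i] for i in sorted(tape))

# Invented example machine: flip every bit, then halt on the first blank.
rules = {
    ("start", "0"): ("start", "1", +1),
    ("start", "1"): ("start", "0", +1),
    ("start", "_"): ("halt", "_", 0),
}
print(run_turing_machine("1011", rules))  # -> "0100_"
```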


2021 ◽  
Author(s):  
Dominic Vincent Ligot ◽  
Mark Toledo

This paper presents Project AEDES, a big data early warning and surveillance system for dengue. The project utilizes Google Search Trends to detect public interest and panic related to dengue. Combining Google Search Trends with precipitation and temperature readings from climate data, the system nowcasts probable dengue cases and dengue-related deaths. It also utilizes FAPAR, NDVI, and NDWI readings from remote sensing to detect likely mosquito hotspots and prioritize interventions. We discuss the origin and development of the project, its current state, and directions for further work.
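To indicate the flavor of such a nowcast, here is a minimal sketch that regresses reported cases on search-trend and climate features with scikit-learn. All numbers are invented for illustration; the abstract does not describe the actual AEDES models at this level of detail.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Invented weekly features: [search interest, precipitation (mm), mean temp (C)].
X = np.array([
    [20, 110, 28.0],
    [35, 150, 29.1],
    [50, 210, 29.8],
    [65, 240, 30.2],
    [40, 180, 29.0],
])
y = np.array([120, 190, 310, 420, 230])  # invented reported dengue cases

model = LinearRegression().fit(X, y)
this_week = np.array([[55, 200, 29.5]])  # hypothetical current-week readings
print(f"nowcast cases: {model.predict(this_week)[0]:.0f}")
```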

