diverse data
Recently Published Documents

Total documents: 422 (five years: 208)
H-index: 24 (five years: 7)

Author(s):  
I Made Agus Wirawan ◽  
Retantyo Wardoyo ◽  
Danang Lelono

Electroencephalogram (EEG) signals offer several advantages for emotion recognition. However, the success of such studies is strongly influenced by: i) the distribution of the data used, ii) differences in participant characteristics, and iii) the characteristics of the EEG signals themselves. In response to these issues, this study examines three important points that affect the success of emotion recognition, framed as research questions: i) What factors need to be considered when generating and distributing EEG data? ii) How can EEG signals be processed with consideration of differences in participant characteristics? iii) How can the characteristics present among EEG signal features be exploited for emotion recognition? The results indicate several important challenges for further study in EEG-based emotion recognition research: i) determining robust methods for imbalanced EEG data, ii) determining appropriate smoothing methods to eliminate disturbances in the baseline signals, iii) determining the best baseline-reduction methods to reduce the effect of participant differences on the EEG signals, and iv) determining a robust capsule network architecture that avoids the loss of knowledge information, and applying it to more diverse data sets.
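Challenge iii) concerns baseline reduction. As a minimal sketch of one common scheme, not the survey's prescribed method: subtract the per-channel mean of a pre-stimulus baseline segment from each trial, which attenuates participant-specific signal offsets. The array shapes and function name here are illustrative assumptions.

```python
import numpy as np

def baseline_reduce(trial, baseline):
    """Subtract the mean of a pre-stimulus baseline segment from a trial.

    trial:    array of shape (channels, samples) recorded during the stimulus
    baseline: array of shape (channels, baseline_samples) recorded before it
    """
    # Per-channel mean of the baseline segment
    baseline_mean = baseline.mean(axis=1, keepdims=True)
    # Removing it attenuates participant-specific offsets in the trial
    return trial - baseline_mean
```

More elaborate variants divide by the baseline instead of subtracting it, but the idea is the same: normalize each trial against the participant's own resting signal.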


Author(s):  
Muhammad Waseem Akhtar ◽  
Syed Ali Hassan ◽  
Aamir Mahmood ◽  
Haejoon Jung ◽  
Hassaan Khaliq Qureshi ◽  
...  

2022 ◽  
Vol 16 (1) ◽  
pp. 0-0

Data is big, diverse, and comes in countless formats, so it is important to ensure the safety and security of shared data. With existing systems limited and still evolving, the objective of this work is to develop a robust image encryption technique that handles heterogeneous data effectively and can withstand state-of-the-art attacks such as brute-force, cropping, mathematical, and differential attacks. The proposed Efficient DNA Cryptographic System (EDCS) model presents a pseudorandom substitution method using logistic sine cosine chaotic maps, in which there is very little correlation between adjacent pixels, and it can decode the image with or without noise, making the system noise-agnostic. The proposed EDCS-based image model using chaotic maps showed improvements in metrics such as Unified Average Changing Intensity (UACI), Number of Pixels Change Rate (NPCR), histogram uniformity, and entropy when compared with existing image security methods.
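The substitution stage can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: the iteration below is one logistic-sine-cosine combination found in the chaotic-map literature (the authors' exact map may differ), and the seed parameters are placeholders.

```python
import math

def lsc_keystream(x0, r, n):
    """Generate n pseudorandom bytes by iterating a combined
    logistic-sine-cosine map (one formulation from the literature)."""
    x, out = x0, []
    for _ in range(n):
        x = math.cos(math.pi * (4 * r * x * (1 - x)
                                + (1 - r) * math.sin(math.pi * x) - 0.5))
        out.append(int((x + 1) / 2 * 255) & 0xFF)  # map [-1, 1] to a byte
    return bytes(out)

def xor_substitute(pixels, key):
    """Substitution stage: XOR each pixel byte with the chaotic keystream."""
    return bytes(p ^ k for p, k in zip(pixels, key))
```

Because XOR is involutive, applying `xor_substitute` twice with the same keystream recovers the original pixels, which is why decryption only needs the same seed pair (x0, r); the tiny perturbation of the seed completely changes the keystream, the property behind the NPCR/UACI figures such schemes report.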


2021 ◽  
Vol 10 (2) ◽  
pp. 31-36
Author(s):  
Alexandros ZACHARIS ◽  
Eloise JABES ◽  
Ifigenia LELLA ◽  
Evangelos REKLEITIS

This paper examines the advantages and disadvantages of executing cyber awareness exercises in two different formats: virtual versus on-site participation. Two EU agencies, EUSPA and ENISA, have organized cyber awareness exercises in previous years; such exercises are an important tool for enhancing and testing an organization's ability to resist and respond to different cyber threats. The objective of this paper is to compare the outcomes of these awareness exercises, executed on-site through physical attendance prior to 2019 and virtually, in a remote setup, in 2020 due to the restrictions posed by the COVID-19 pandemic. ENISA, in collaboration with EUSPA, has accumulated raw and diverse data from the evaluation reports of the cyber events mentioned above. The comparison of these data focuses on the most important success factors of a cyber awareness exercise, such as participation, cooperation (social interaction/team building), effectiveness, fun, and tools, and identifies how the location of the participants affects them. The aim of this work is to highlight, through statistical analysis, the benefits of a hybrid approach to the exercise setup, one combining elements of both virtual and on-site participation. Depending on the kind of exercise, such a hybrid setup gives an exercise organizer more flexibility and helps maximize effectiveness, while adapting to the fluctuating working regimes of the near future, namely teleworking. Furthermore, a modular exercise design is proposed in order to adapt to location limitations without negatively affecting the other factors analyzed.


2021 ◽  
Vol 4 (5) ◽  
pp. 1199-1218
Author(s):  
Evgeniya D. Zarubina

Minute books (pinkas) constitute one of the most valuable sources for studying the history of Jewish communal institutions up to the 20th century. They comprise rich and diverse data on the everyday activities of the Jewish people. In academic usage, the word “pinkas” is applied not only to communal minute books and the minute books of communal bodies but also to private minute books. The article traces the development of this category of sources, which evolved from private minute books dating back to at least the 11th century to communal ones, as well as the minute books of communal bodies, based on a dozen manuscript examples. These are mostly of European origin, with a few Eastern additions. This evolutionary process becomes visible through analysis of the manuscripts' internal structure and composition. Special attention is paid to the techniques used to enforce this structure at the codicological and paleographic levels. The data at hand suggest that at the beginning of the modern period some minute books shifted from the private to the public domain. This was a response to demand from the rapidly evolving communal institutions. To suit a widened audience of varying backgrounds, communal minute books, compared to those for private use, adopted a more uniform structure as well as a set of “navigation” or referencing tools, such as captions written in the margins. The early modern Italian communal minute books tend to be the most structured ones.


Webology ◽  
2021 ◽  
Vol 18 (2) ◽  
pp. 462-474
Author(s):  
Marischa Elveny ◽  
Mahyuddin KM Nasution ◽  
Muhammad Zarlis ◽  
Syahril Efendi

Business intelligence can be described as the techniques and tools for acquiring and transforming raw data into meaningful, useful information for business analysis. This study aims to build business intelligence that optimizes large-scale data based on e-metrics, i.e., data created from electronic-based customer behavior. As more and more large data sets become available, the challenge of analyzing them grows. Business intelligence therefore faces new challenges, but also interesting opportunities, since it can describe the needs of the market share in real time. Optimization is done using adaptive multivariate regression, which can address high-dimensional data, produce accurate predictions of response variables, and yield models that are continuous at knots selected by the smallest generalized cross-validation (GCV) value. Large and diverse data are first simplified and then modeled by degree of behavioral similarity, using basic measurements of distances, attributes, times, places, and transactions between social actors. Customer purchases represent each preferred behavior, and a score for each customer can be calculated from a formula over 7 input variables. Adaptive multivariate regression searches customer behavior to obtain the deviation cuts that determine performance on the data. The results show the strategies and information needed for a sustainable business: merchants who sell fast food or run food stalls are more in demand among customers.
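Knot selection by smallest GCV can be illustrated with a simplified, single-knot version of MARS-style adaptive regression. This is a sketch under assumptions, not the study's model: real adaptive multivariate regression adds and prunes many hinge pairs across several variables, while here one knot on one variable is chosen.

```python
import numpy as np

def hinge_basis(x, knot):
    # MARS-style pair of hinge functions: max(0, x - t) and max(0, t - x)
    return np.column_stack([np.maximum(0, x - knot), np.maximum(0, knot - x)])

def gcv(y, y_hat, n_params):
    # Generalized cross-validation: residual sum of squares penalized
    # by the effective number of parameters
    n = len(y)
    rss = np.sum((y - y_hat) ** 2)
    return rss / (n * (1 - n_params / n) ** 2)

def best_knot(x, y, candidates):
    """Pick the hinge knot with the smallest GCV score."""
    best = None
    for t in candidates:
        X = np.column_stack([np.ones_like(x), hinge_basis(x, t)])
        beta, *_ = np.linalg.lstsq(X, y, rcond=None)
        score = gcv(y, X @ beta, X.shape[1])
        if best is None or score < best[0]:
            best = (score, t)
    return best[1]
```

The fitted model stays continuous at the chosen knot because each hinge function is itself continuous; the GCV penalty discourages adding knots that improve the fit less than they cost in model complexity.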


2021 ◽  
Vol 3 ◽  
Author(s):  
David Topping ◽  
Thomas J. Bannan ◽  
Hugh Coe ◽  
James Evans ◽  
Caroline Jay ◽  
...  

The increasing amount of data collected about the environment brings tremendous potential to create digital systems that can predict the impact of intended and unintended changes. With growing interest in the construction of digital twins across multiple sectors, combined with rapid changes in where we spend our time and the nature of the pollutants we are exposed to, we find ourselves at a crossroads of opportunity with regard to air quality mitigation in cities. With this in mind, we briefly discuss the interplay between available data and the state of the science on air quality, infrastructure needs, and areas of opportunity that should drive subsequent planning of the digital twin ecosystem and its associated components. Data-driven modeling and digital twins are promoted as the most efficient route to decision making in an evolving atmosphere. However, given the diverse data streams on which these frameworks are built, they must be supported by a diverse community. This is an opportunity to build a collaborative space that facilitates closer working between instrument manufacturers, data scientists, atmospheric scientists, and user groups, including but not limited to regional and national policy makers.


Semantic Web ◽  
2021 ◽  
pp. 1-3
Author(s):  
Krzysztof Janowicz ◽  
Cogan Shimizu ◽  
Pascal Hitzler ◽  
Gengchen Mai ◽  
Shirly Stephen ◽  
...  

One of the key value propositions for knowledge graphs and semantic web technologies is fostering semantic interoperability, i.e., integrating data across different themes and domains. But why do we aim at interoperability in the first place? A common answer to this question is that each individual data source only contains partial information about some phenomenon of interest. Consequently, combining multiple diverse datasets provides a more holistic perspective and enables us to answer more complex questions, e.g., those that span between the physical sciences and the social sciences. Interestingly, while these arguments are well established and go by different names, e.g., variety in the realm of big data, we seem less clear about whether the same arguments apply on the level of schemata. Put differently, we want diverse data, but do we also want diverse schemata or a single one to rule them all?


2021 ◽  
Author(s):  
Christian Windisch

Abstract This paper presents a holistic approach to modern oilfield and well surveillance through the inclusion of state-of-the-art edge computing applications in combination with a novel data transmission technology and algorithms developed in-house for automatic condition monitoring of sucker rod pump (SRP) systems. The objective is to enable the responsible specialist staff to focus on the most important oilfield management decisions, rather than wasting time on data collection and preparation. A self-operated data communication system based on LPWAN technology transfers the dyno cards, generated by an electric load cell, into the in-house developed production assistance software platform. Suitably programmed AI algorithms enable automatic condition detection from the incoming dyno cards, including conversion and analysis of the corresponding subsurface dynamograms. A smart alarming system reports failure conditions and specifies whether rod rupture, a pump-off condition, gas lock, or paraffin precipitation has occurred in the well. A surface-mounted measuring device delivers liquid level and bottomhole pressure information automatically into the software. Based on these diverse data, the operations team plans subsequent activities. The holistic application approach is illustrated using the case study of an SRP-operated well in an Austrian brownfield.


2021 ◽  
Vol 22 (3) ◽  
pp. 283-293
Author(s):  
Usha Patel ◽  
Hardik Dave ◽  
Vibha Patel

There has been extensive research in the field of hyperspectral image classification using deep neural networks. Deep learning based approaches require huge amounts of labelled data samples, but for hyperspectral images only a small number of labelled samples are available. We can therefore combine Active Learning with deep learning based approaches to extract the most informative data samples, training the classifier to achieve better classification accuracy with fewer labelled samples. A considerable amount of research has addressed selecting diverse data samples from the pool of unlabelled samples. We present a novel diversity-based Active Learning approach utilizing information about the clustered data distribution. We incorporate a diversity criterion into the Active Learning selection criteria and combine it with a Convolutional Neural Network for feature extraction and classification. This approach helps us obtain the most informative and diverse data samples. We have compared our proposed approach with three other sampling methods in terms of classification accuracy and Cohen's kappa score, and the comparison shows that our approach gives better results than the other sampling methods.
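One common way to combine diversity with an Active Learning selection criterion is to cluster the unlabelled pool and query the most uncertain sample from each cluster. The sketch below illustrates that general pattern only; the clustering method, the uncertainty measure, and all names here are illustrative assumptions, not the paper's exact procedure.

```python
import numpy as np

def diverse_query(features, uncertainty, n_clusters, n_iter=10):
    """Return one query index per cluster: k-means over the unlabelled
    pool, then the most uncertain sample within each cluster."""
    # Farthest-point initialization keeps the seed centers spread out
    centers = [features[0].astype(float)]
    for _ in range(n_clusters - 1):
        d = np.min([((features - c) ** 2).sum(axis=1) for c in centers],
                   axis=0)
        centers.append(features[np.argmax(d)].astype(float))
    centers = np.array(centers)

    # A few Lloyd iterations of plain k-means
    for _ in range(n_iter):
        labels = np.argmin(
            ((features[:, None, :] - centers[None]) ** 2).sum(axis=2), axis=1)
        for k in range(n_clusters):
            if np.any(labels == k):
                centers[k] = features[labels == k].mean(axis=0)

    # From each cluster, query the sample the classifier is least sure about
    picks = []
    for k in range(n_clusters):
        idx = np.where(labels == k)[0]
        if len(idx):
            picks.append(int(idx[np.argmax(uncertainty[idx])]))
    return picks
```

In an image pipeline, `features` would come from the CNN's penultimate layer and `uncertainty` from its softmax outputs (e.g., entropy); the per-cluster constraint is what prevents the batch from collapsing onto near-duplicate samples.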

