PeerJ Computer Science
Latest Publications


TOTAL DOCUMENTS: 807 (five years: 639)

H-INDEX: 22 (five years: 6)

Published by PeerJ

ISSN: 2376-5992

2022, Vol. 8, pp. e837
Author(s):  
Joel Pinney ◽  
Fiona Carroll ◽  
Paul Newbury

Background Human senses have evolved to recognise sensory cues. Beyond perception, they play an integral role in our emotional processing, learning, and interpretation. They help us sculpt our everyday experiences and can be triggered by aesthetics to form the foundations of our interactions with each other and our surroundings. In terms of Human-Robot Interaction (HRI), robots can interact with both people and environments through their senses. They can exhibit human-like characteristics, which in turn can make the interchange with technology a more appealing and acceptable experience. However, for many reasons, people still do not seem to trust and accept robots. Trust is expressed as a person’s ability to accept the potential risks associated with participating alongside an entity such as a robot. Whilst trust is an important factor in building relationships with robots, the presence of uncertainties adds a further dimension to the decision to trust a robot. To begin to understand how to build trust with robots and reverse this negative perception, this paper examines the influence of aesthetic design techniques on the human ability to trust robots. Method This paper explores the unique opportunities robots have to improve their facilities for empathy, emotion, and social awareness beyond their more cognitive functionalities. Through an online questionnaire distributed globally, we explored participants’ willingness to trust the Canbot U03 robot. Participants were presented with a range of visual questions that manipulated the robot’s facial screen and were asked whether or not they would trust the robot. A selection of questions was designed to put participants in situations where they had to decide whether or not to trust a robot’s responses based solely on its visual appearance. We accomplished this by manipulating different design elements of the robot’s facial and chest screens, which influenced the human-robot interaction. Results We found that certain facial aesthetics seem to be more trustworthy than others, such as a cartoon face versus a human face, and that certain visual variables (e.g., blur) afforded uncertainty more than others. Consequently, participants’ uncertainty about the visualisations greatly influenced their willingness to accept and trust the robot. The results of introducing certain anthropomorphic characteristics were consistent with the uncanny valley theory: pushing the degree of human likeness created a thin line between participants accepting the robot and rejecting it. By understanding which manipulations of design elements create the aesthetic effects that trigger affective processes, this paper enriches our knowledge of how we might design for certain emotions and feelings, and ultimately for more socially acceptable and trustworthy robotic experiences.


2022, Vol. 8, pp. e835
Author(s):  
David Schindler ◽  
Felix Bensmann ◽  
Stefan Dietze ◽  
Frank Krüger

Science across all disciplines has become increasingly data-driven, leading to additional needs with respect to software for collecting, processing and analysing data. Thus, transparency about software used as part of the scientific process is crucial to understand the provenance of individual research data and insights, is a prerequisite for reproducibility, and can enable macro-analysis of the evolution of scientific methods over time. However, missing rigor in software citation practices renders the automated detection and disambiguation of software mentions a challenging problem. In this work, we provide a large-scale analysis of software usage and citation practices facilitated through an unprecedented knowledge graph of software mentions and affiliated metadata, generated through supervised information extraction models trained on a unique gold standard corpus and applied to more than 3 million scientific articles. Our information extraction approach distinguishes different types of software and mentions, disambiguates mentions, and significantly outperforms the state of the art, leading to the most comprehensive corpus of 11.8 M software mentions, described through a knowledge graph consisting of more than 300 M triples. Our analysis provides insights into the evolution of software usage and citation patterns across various fields, ranks of journals, and impact of publications. While this is, to the best of our knowledge, the most comprehensive analysis of software use and citation to date, all data and models are shared publicly to facilitate further research into the scientific use and citation of software.


2022, Vol. 8, pp. e834
Author(s):  
Sara Mejahed ◽  
M Elshrkawey

The demand for virtual machine requests has increased recently due to the growing number of users and applications. Therefore, virtual machine placement (VMP) is now critical for the provision of efficient resource management in cloud data centers. The VMP process considers the placement of a set of virtual machines onto a set of physical machines, in accordance with a set of criteria. The optimal solution for multi-objective VMP can be determined by using a fitness function that combines the objectives. This paper proposes a novel model to enhance the performance of the VMP decision-making process. Placement decisions are made based on a fitness function that combines three criteria: placement time, power consumption, and resource wastage. The proposed model aims to satisfy minimum values for the three objectives for placement onto all available physical machines. To optimize the VMP solution, the proposed fitness function was implemented using three optimization algorithms: particle swarm optimization with Lévy flight (PSOLF), flower pollination optimization (FPO), and a proposed hybrid algorithm (HPSOLF-FPO). Each algorithm was tested experimentally. The results of the comparative study between the three algorithms show that the hybrid algorithm has the strongest performance. Moreover, the proposed algorithm was tested against the bin packing best fit strategy. The results show that the proposed algorithm outperforms the best fit strategy in total server utilization.
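The weighted-sum idea behind such a fitness function can be sketched as follows. This is a minimal illustration, not the paper's exact formulation: the linear power model, the wastage metric, and the equal weights are all assumptions made here for concreteness.

```python
# Hedged sketch: a weighted-sum fitness for multi-objective VM placement.
# The constants (p_idle, p_max) and equal weights are illustrative only.

def power_consumption(util, p_idle=70.0, p_max=250.0):
    """Common linear server power model: idle power plus a part
    proportional to CPU utilisation; zero when the server is off."""
    return p_idle + (p_max - p_idle) * util if util > 0 else 0.0

def resource_wastage(cpu_util, mem_util, eps=1e-4):
    """Wastage grows when CPU and memory utilisation are unbalanced,
    since the scarcer resource strands capacity of the other."""
    return (abs(cpu_util - mem_util) + eps) / (cpu_util + mem_util + eps)

def vmp_fitness(placements, w=(1/3, 1/3, 1/3)):
    """placements: list of (cpu_util, mem_util, placement_time) tuples,
    one per active PM. Lower fitness is better: all three objectives
    (time, power, wastage) are minimised jointly."""
    power = sum(power_consumption(c) for c, m, t in placements)
    waste = sum(resource_wastage(c, m) for c, m, t in placements)
    time_ = sum(t for c, m, t in placements)
    return w[0] * time_ + w[1] * power + w[2] * waste
```

An optimizer such as PSOLF or FPO would then search over candidate placements, scoring each with `vmp_fitness` and keeping the minimum.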


2022, Vol. 8, pp. e801
Author(s):  
Bello Musa Yakubu ◽  
Rabia Latif ◽  
Aisha Yakubu ◽  
Majid Iqbal Khan ◽  
Auwal Ibrahim Magashi

The increasing number of rice product safety issues and the potential for contamination have created an enormous need for an effective traceability strategy for the rice supply chain. Tracing the origins of a rice product from raw materials to end customers is very complex and costly. Existing food supply chain methods (for example, for rice) do not provide a scalable and cost-effective means of agricultural food supply. Moreover, consumers lack the capability and resources required to check or report on the quality of agricultural goods in terms of defects or contamination. Consequently, customers are forced to decide whether to utilize or discard the goods. Blockchain, however, is an innovative framework capable of offering a transformative solution for the traceability of agricultural products and food supply chains. The aim of this paper is to propose a framework capable of tracking and monitoring all interactions and transactions between all stakeholders in the rice chain ecosystem through smart contracts. The model incorporates a system for customer satisfaction feedback, which enables all stakeholders to get up-to-date information on product quality, enabling them to make more informed supply chain decisions. Each transaction is documented and stored in the public ledger of the blockchain. The proposed framework provides a safe, efficient, reliable, and effective way to monitor and track rice product safety and quality, especially during product purchasing. The security and performance analysis results show that the proposed framework outperforms the benchmark techniques in terms of cost-effectiveness, security, and scalability, with low computational overhead.


2022, Vol. 8, pp. e788
Author(s):  
Victor Ponce ◽  
Bessam Abdulrazak

The current generation of connected devices and the Internet of Things augment people’s capabilities through ambient intelligence. Ambient Intelligence (AmI) support systems contain applications that consume available services in the environment to serve users. A well-known design of these applications follows a service architecture style and implements artificial intelligence mechanisms to maintain awareness of the context: the service architecture style enables the distribution of capabilities and facilitates interoperability, while intelligence and context-awareness allow the environment to adapt and improve the interaction. Smart objects in distributed deployments, and machines’ increasing awareness of device and user context, also lead to architectures that include self-governing policies providing self-service. We have systematically reviewed and analyzed ambient system governance, considering service-oriented architecture (SOA) as a reference model. We applied a systematic mapping process, obtaining 198 papers for screening (out of 712 returned by searches in research databases). We then reviewed and categorized 68 papers related to 48 research projects, selected for fulfilling ambient intelligence and SOA principles and concepts. This paper presents the results of our analysis, including the existing governance designs, the distribution of adopted characteristics, and the trend to incorporate services in the context-aware process. We also discuss the identified challenges and analyze research directions.


2022, Vol. 8, pp. e820
Author(s):  
Hafiza Anisa Ahmed ◽  
Anum Hameed ◽  
Narmeen Zakaria Bawany

The expeditious growth of the World Wide Web and the rampant flow of network traffic have resulted in a continuous increase in network security threats. Cyber attackers seek to exploit vulnerabilities in network architecture to steal valuable information or disrupt computer resources. A Network Intrusion Detection System (NIDS) is used to effectively detect various attacks, thus providing timely protection of network resources from these attacks. To implement a NIDS, a range of supervised and unsupervised machine learning approaches is applied to detect irregularities in network traffic and to address network security issues. Such NIDSs are trained using various datasets that include attack traces. However, due to the advancement of modern-day attacks, these systems are unable to detect emerging threats. Therefore, a NIDS needs to be trained and developed with a modern, comprehensive dataset that contains contemporary normal and attack activities. This paper presents a framework in which different machine learning classification schemes are employed to detect various types of network attack categories. Five machine learning algorithms (Random Forest, Decision Tree, Logistic Regression, K-Nearest Neighbors, and Artificial Neural Networks) are used for attack detection. This study uses a dataset published by the University of New South Wales (UNSW-NB15), a relatively new dataset that contains a large amount of network traffic data with nine categories of network attacks. The results show that the classification models achieved their highest accuracy, 89.29%, with the Random Forest algorithm. Further improvement in the accuracy of the classification models is observed when the Synthetic Minority Oversampling Technique (SMOTE) is applied to address the class imbalance problem. After applying SMOTE, the Random Forest classifier achieved an accuracy of 95.1% with 24 features selected by Principal Component Analysis.
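A minimal sketch of this kind of pipeline on synthetic data follows. It is not the paper's experiment: a scikit-learn toy dataset stands in for UNSW-NB15, and plain random duplication of minority samples stands in for SMOTE (which generates synthetic interpolated samples and lives in the separate imbalanced-learn package).

```python
# Hedged sketch: Random Forest on an imbalanced dataset, with naive
# minority oversampling standing in for SMOTE. Illustrative only.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# Synthetic stand-in for an imbalanced intrusion dataset (90% / 10%).
X, y = make_classification(n_samples=2000, n_features=20,
                           weights=[0.9, 0.1], random_state=42)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=42)

# Oversample the minority class in the training split only, so the
# test set keeps its natural distribution.
rng = np.random.default_rng(42)
minority = np.flatnonzero(y_tr == 1)
extra = rng.choice(minority, size=np.sum(y_tr == 0) - minority.size,
                   replace=True)
X_bal = np.vstack([X_tr, X_tr[extra]])
y_bal = np.concatenate([y_tr, y_tr[extra]])

clf = RandomForestClassifier(n_estimators=100, random_state=42)
clf.fit(X_bal, y_bal)
acc = accuracy_score(y_te, clf.predict(X_te))
```

Real SMOTE interpolates between minority neighbours rather than duplicating rows, which typically helps the classifier generalise better than this stand-in.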


2022, Vol. 8, pp. e810
Author(s):  
Abdallah Qusef ◽  
Hamzeh Alkilani

The Internet’s emergence as a global communication medium has dramatically expanded the volume of content that is freely accessible. Using this information, open-source intelligence (OSINT) seeks to meet basic intelligence requirements. Although open-source information has historically been synonymous with strategic intelligence, today’s consumers range from governments to corporations to everyday people. This paper aimed to describe open-source intelligence and to show how to use a few OSINT resources. In this article, OSINT (a combination of public information, social engineering, open-source information, and internet information) was examined to characterise the present situation, and suggestions were made as to what could happen in the future. OSINT is gaining prominence, and its application is spreading into different areas. The primary difficulty with OSINT is separating relevant information from large volumes of data. Thus, this paper proposed and illustrated three OSINT alternatives, demonstrating their existence and distinguishing characteristics. The solution analysis took the form of a presentation evaluation, during which the usage and effects of the selected OSINT solutions were reported and observed. The results demonstrate the breadth and dispersion of OSINT solutions. The mechanism by which OSINT data searches are returned varies greatly between solutions, and combining data from numerous OSINT solutions into a detailed summary and interpretation requires the manual use of multiple disjointed tools. Visualization of results is anticipated to be a prominent theme in the development of OSINT solutions. Individuals’ data search and analysis abilities are another trend worth following, whether to optimize the productivity of currently available OSINT solutions or to create more advanced ones in the future.


2022, Vol. 8, pp. e852
Author(s):  
Zhihua Li ◽  
Meini Pan ◽  
Lei Yu

The unbalanced resource utilization of physical machines (PMs) in cloud data centers can cause resource wastage, workload imbalance, and even a negative impact on quality of service (QoS). To address this problem, this paper proposes a multi-resource collaborative optimization control (MCOC) mechanism for virtual machine (VM) migration. It uses a Gaussian model to adaptively estimate the probability that a running PM is in the multi-resource utilization balance state. Given this estimated probability, we propose effective selection algorithms for live VM migration between source and destination hosts, including an adaptive Gaussian model-based VM placement (AGM-VMP) algorithm and a VM consolidation (AGM-VMC) method. Experimental results show that the AGM-VMC method effectively achieves load balance, significantly improves resource utilization, and reduces data center energy consumption while guaranteeing QoS.
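The idea of scoring a host's multi-resource balance with a Gaussian model can be sketched as follows. The imbalance metric (standard deviation across per-resource utilisations) and the kernel width are assumptions made here for illustration; they are not the paper's MCOC formulation.

```python
# Hedged sketch: a Gaussian score of how balanced a PM's resource
# utilisations are. Metric and sigma are illustrative assumptions.
import math

def imbalance(utils):
    """Standard deviation of per-resource utilisations
    (e.g. CPU, memory, bandwidth), each in [0, 1]."""
    mean = sum(utils) / len(utils)
    return math.sqrt(sum((u - mean) ** 2 for u in utils) / len(utils))

def balance_probability(utils, sigma=0.15):
    """Gaussian kernel on the imbalance: 1.0 when all resources are
    equally utilised, decaying as utilisations diverge."""
    d = imbalance(utils)
    return math.exp(-(d * d) / (2 * sigma * sigma))
```

A consolidation loop could treat hosts with a low balance score as migration sources and prefer destinations whose score stays high after a candidate VM is added.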


2022, Vol. 8, pp. e843
Author(s):  
Murat Hacimurtazaoglu ◽  
Kemal Tutuncu

Background In terms of data-hiding areas, video steganography is more advantageous than other steganography techniques since it uses video as its cover medium. Any video steganography must create and maintain a good trade-off among robustness, imperceptibility, and payload. Even though it has the advantage of capacity, video steganography has a robustness problem, especially when the spatial domain is used to implement it. Transformation operations and statistical attacks can harm secret data. Thus, the ideal video steganography technique must provide high imperceptibility, high payload, and resistance to visual, statistical, and transformation-based steganalysis attacks. Methods One of the most common spatial methods for hiding data within the cover medium is the Least Significant Bit (LSB) method. In this study, an LSB-based video steganography application that uses a poly-pattern key block matrix (KBM) as the key was proposed. The key is a 64 × 64 pixel block matrix that consists of 16 sub-pattern blocks, each 16 × 16 pixels. To increase the security of the proposed approach, sub-patterns in the KBM can be shifted in four directions and rotated up to 270° depending on user preference and logical operations. For additional security, XOR and AND logical operations were used to determine whether to choose the next predetermined 64 × 64 pixel block or to jump to another pixel block in the cover video frame when placing a KBM to embed the secret data. The combination of the variable KBM structure and logical operations for secret-data embedding distinguishes the proposed algorithm from previous LSB-based video steganography studies. Results Mean Squared Error (MSE), Structural Similarity Index (SSIM), and Peak Signal-to-Noise Ratio (PSNR) were calculated to assess the imperceptibility (i.e., the resistance against visual attacks) of the proposed algorithm. The proposed algorithm obtained its best MSE, SSIM, and PSNR values of 0.00066, 0.99999, and 80.01458 dB for a 42.8 Kb secret message, and 0.00173, 0.99999, and 75.72723 dB for a 109 Kb secret message, respectively. These results are better than those of classic LSB and of other LSB-based video steganography approaches in the literature. Since the proposed system embeds an equal amount of data in each video frame, less data is lost under transformation operations, and lost fragments can be recovered from the surrounding text with natural language processing. The variable structure of the KBM, the logical operations, and the extra security measures make the proposed system more secure and complex, increasing its unpredictability and resistance against statistical attacks. Thus, the proposed method provides high imperceptibility and resistance to visual, statistical, and transformation-based attacks while supporting an acceptable, even high, payload.
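The LSB primitive underlying this class of methods can be sketched as follows. This is plain LSB substitution only; the paper's poly-pattern KBM, its shifts and rotations, and the XOR/AND block-selection logic would sit on top of this primitive to decide which pixels receive which bits.

```python
# Hedged sketch: plain LSB embedding/extraction in one greyscale frame.
# The KBM-driven pixel selection of the paper is not reproduced here.
import numpy as np

def embed(frame, bits):
    """Write `bits` (iterable of 0/1) into the least significant bits
    of a uint8 frame, one bit per pixel in raster order."""
    flat = frame.flatten()                    # flatten() returns a copy
    for i, b in enumerate(bits):
        flat[i] = (flat[i] & 0xFE) | b        # clear LSB, set it to the bit
    return flat.reshape(frame.shape)

def extract(frame, n_bits):
    """Read the first `n_bits` LSBs back out of the frame."""
    return [int(p) & 1 for p in frame.flatten()[:n_bits]]

frame = np.full((4, 4), 128, dtype=np.uint8)  # toy 4x4 grey frame
secret = [1, 0, 1, 1, 0, 0, 1, 0]
stego = embed(frame, secret)
recovered = extract(stego, len(secret))
```

Because each pixel value changes by at most 1, the per-frame MSE stays tiny, which is why LSB methods score so well on MSE/PSNR-based imperceptibility measures.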


2022, Vol. 8, pp. e790
Author(s):  
Zsigmond Benkő ◽  
Marcell Stippinger ◽  
Roberta Rehus ◽  
Attila Bencze ◽  
Dániel Fabó ◽  
...  

Data dimensionality informs us about data complexity and sets limits on the structure of successful signal processing pipelines. In this work we revisit and improve the manifold-adaptive Farahmand-Szepesvári-Audibert (FSA) dimension estimator, making it one of the best nearest neighbor-based dimension estimators available. We compute the probability density function of local FSA estimates when the local manifold density is uniform. Based on this probability density function, we propose using the median of local estimates as a basic global measure of intrinsic dimensionality, and we demonstrate the advantages of this asymptotically unbiased estimator over the previously proposed statistics, the mode and the mean. Additionally, from the probability density function we derive the maximum likelihood formula for global intrinsic dimensionality under the i.i.d. assumption. We tackle edge and finite-sample effects with an exponential correction formula, calibrated on hypercube datasets. We compare the performance of the corrected median-FSA estimator with kNN estimators: maximum likelihood (Levina-Bickel), 2NN, and two implementations of DANCo (R and MATLAB). We show that the corrected median-FSA estimator beats the maximum likelihood estimator and is on an equal footing with DANCo on standard synthetic benchmarks, according to mean percentage error and error rate metrics. With the median-FSA algorithm, we reveal diverse changes in neural dynamics during resting state and during epileptic seizures. We identify brain areas with lower-dimensional dynamics that are possible causal sources and candidates for being seizure onset zones.
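The local FSA estimate compares the distances to the k-th and 2k-th nearest neighbours, d_hat(x) = ln 2 / ln(R_2k(x) / R_k(x)), and the paper aggregates these by their median. A minimal sketch, using brute-force neighbour search and omitting the paper's edge/finite-sample correction (so this is the uncorrected median-FSA only):

```python
# Hedged sketch: uncorrected median-FSA intrinsic dimension estimate.
import numpy as np

def median_fsa(X, k=5):
    """X: (n, d) array of points. Returns the median of the local FSA
    dimension estimates ln(2) / ln(R_2k / R_k)."""
    # Pairwise squared distances; push self-distances to infinity so a
    # point is never its own neighbour.
    d2 = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)
    np.fill_diagonal(d2, np.inf)
    dists = np.sqrt(np.sort(d2, axis=1))      # ascending neighbour distances
    r_k = dists[:, k - 1]                     # distance to k-th neighbour
    r_2k = dists[:, 2 * k - 1]                # distance to 2k-th neighbour
    local = np.log(2.0) / np.log(r_2k / r_k)  # local FSA estimates
    return float(np.median(local))

rng = np.random.default_rng(0)
points = rng.uniform(size=(1500, 2))  # uniform sample of a 2-D manifold
est = median_fsa(points)              # should land close to 2
```

The brute-force distance matrix is O(n^2) in memory; a k-d tree or ball tree would be the practical choice for large n.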

