web interfaces
Recently Published Documents

TOTAL DOCUMENTS: 301 (FIVE YEARS: 80)
H-INDEX: 15 (FIVE YEARS: 4)

2022, pp. 58-76
Author(s): Gonca Gokce Menekse Dalveren, Serhat Peker

This study presents an exploratory evaluation of the accessibility and usability of digital library article pages. For this purpose, four widely known digital libraries (DLs), namely Science Direct, Institute of Electrical and Electronics Engineers (IEEE) Xplore, Association for Computing Machinery, and SpringerLink, were examined. In the first stage, the article web interfaces of these selected DLs were analyzed against standard web guidelines using automatic evaluation tools to assess their accessibility. In the second stage, to evaluate the usability of these web interfaces, eye-tracking experiments with 30 participants were conducted. The results of the analysis show that digital library article pages are not free of accessibility and usability problems. Overall, this study highlights accessibility and usability problems of digital library article interfaces, and these findings can provide feedback to web developers for making their article pages more accessible and usable for their users.
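As an illustration of the kind of automated guideline check used in the first stage, the sketch below flags two common WCAG issues (missing alt text and unlabeled form controls) in a page's HTML. It is a minimal example assuming BeautifulSoup is available; it is not the evaluation tooling used in the study.

```python
# Illustrative sketch (not the study's tooling): flag two common WCAG issues
# in an already-fetched article page.
from bs4 import BeautifulSoup

def quick_accessibility_check(html: str) -> list[str]:
    """Return human-readable warnings for a few basic WCAG checks."""
    soup = BeautifulSoup(html, "html.parser")
    warnings = []

    # Images should carry a text alternative (WCAG 1.1.1).
    for img in soup.find_all("img"):
        if not img.get("alt"):
            warnings.append(f"<img src='{img.get('src', '?')}'> has no alt text")

    # Form controls should be associated with a label (WCAG 1.3.1 / 3.3.2).
    labelled_ids = {lab.get("for") for lab in soup.find_all("label")}
    for control in soup.find_all(["input", "select", "textarea"]):
        if control.get("type") in ("hidden", "submit", "button"):
            continue
        if control.get("id") not in labelled_ids and not control.get("aria-label"):
            warnings.append(f"form control '{control.get('name', '?')}' has no label")

    return warnings
```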


Energies, 2021, Vol 14 (23), pp. 8171
Author(s): Asfandyar Khan, Arif Iqbal Umar, Arslan Munir, Syed Hamad Shirazi, Muazzam A. Khan, ...

The Internet of Things (IoT) enables a diverse set of applications such as distribution automation, smart cities, wireless sensor networks, and advanced metering infrastructure (AMI). In smart grids (SGs), quality of service (QoS) and AMI traffic management need to be considered in the design of efficient AMI architectures. In this article, we propose a QoS-aware machine-learning-based framework for AMI applications in smart grids. Our proposed framework comprises a three-tier hierarchical architecture for AMI applications, a machine-learning-based hierarchical clustering approach, and a priority-based scheduling technique to ensure QoS in AMI applications in smart grids. We introduce a three-tier hierarchical architecture for AMI applications in smart grids to take advantage of IoT communication technologies and the cloud infrastructure. In this architecture, smart meters are deployed over a georeferenced area where the control center has remote access over the Internet to these network devices. More specifically, these devices can be digitally controlled and monitored using simple web interfaces such as REST APIs. We modify the existing K-means algorithm to construct a hierarchical clustering topology that employs Wi-SUN technology for bi-directional communication between smart meters and data concentrators. Further, we develop a queuing model in which different priorities are assigned to each item of the critical and normal AMI traffic based on its latency and packet size. The critical AMI traffic is scheduled first using priority-based scheduling, while the normal traffic is scheduled with a first-in-first-out (FIFO) scheme, to ensure the QoS requirements of both traffic classes in the smart grid network. The numerical results demonstrate that the target coverage and connectivity requirements of all smart meters are fulfilled with the minimum number of data concentrators in the design. Additionally, the numerical results show that the architectural cost is reduced and the bottleneck problem of the data concentrator is eliminated. Furthermore, the performance of the proposed framework is evaluated and validated on the CloudSim simulator. The simulation results show that our proposed framework achieves more efficient CPU utilization than a traditional framework that uses single-hop communication from smart meters to data concentrators with a first-in-first-out scheduling scheme.
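The two-class queuing idea can be sketched as follows. This is a simplified illustration under assumed packet fields (a "class" label and a latency budget), not the paper's implementation: critical AMI traffic enters a priority queue ordered by its latency budget, normal traffic enters a plain FIFO queue, and the dispatcher always drains the critical queue first.

```python
# Minimal sketch of priority-based scheduling for critical AMI traffic with
# FIFO handling of normal traffic; packet fields are assumed for illustration.
import heapq
from collections import deque
from dataclasses import dataclass, field

@dataclass(order=True)
class CriticalPacket:
    priority: int                       # lower value = tighter latency budget
    payload: dict = field(compare=False)

critical_queue: list[CriticalPacket] = []   # min-heap keyed on priority
normal_queue: deque = deque()               # plain FIFO

def enqueue(packet: dict) -> None:
    if packet["class"] == "critical":
        # Priority derived from the packet's latency budget (ms), as assumed here.
        heapq.heappush(critical_queue, CriticalPacket(packet["latency_ms"], packet))
    else:
        normal_queue.append(packet)

def dequeue() -> dict | None:
    # Critical traffic is always served before normal traffic.
    if critical_queue:
        return heapq.heappop(critical_queue).payload
    if normal_queue:
        return normal_queue.popleft()
    return None
```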


2021, pp. 096100062110589
Author(s): Artemis Chaleplioglou, Alexandros Koulouris

Academic scholarly communication is the predominant business of researchers, scientists, and scholars. It is the core element of promoting scientific thought and investigation and of building up solid knowledge. The development of preprint platforms, web-interfaced server repositories of electronic scholarly papers that are submitted by their authors and openly available to the scientific community, has introduced a new form of academic communication. The distribution of a preprint of a scientific manuscript allows the authors to claim priority of discovery, in a manner similar to conference proceedings, but also creates an anteriority that prevents protection by a patent application. Herein, we review the scope and role of preprint platforms in academia and explore individual cases: arXiv, SSRN, OSF Preprints, HAL, bioRxiv, EconStor, RePEc, PhilArchive, Research Square, viXra, Cryptology ePrint Archive, Preprints.org, ChinaXiv, medRxiv, JMIR Preprints, Authorea, ChemRxiv, engrXiv, e-LiS, SciELO, PsyArXiv, F1000 Research, and Zenodo. We discuss their significance in promoting scientific discovery, the potential risks to scientific integrity, the policies on data distribution and intellectual property rights, and the advantages and disadvantages for the stakeholders: authors, institutions, states, scientific journals, the scientific community, and the public. In this review we explore the scope and policies of existing preprint platforms across different academic research fields.


Author(s): Victor T. Hayashi, Felipe V. de Almeida, Andrea E. Komo

This paper describes a Field Programmable Gate Array (FPGA) testbed that enables real-time Bitcoin experimentation with energy consumption monitoring. The Internet of Things (IoT) infrastructure enables practical activities under a remote-lab paradigm, allowing students and enthusiasts to gain a deep understanding of Blockchain technology at the higher cognitive levels of Bloom's taxonomy. The proposed solution is validated with an open-source Bitcoin miner implemented in Verilog, together with mobile and web interfaces for energy consumption monitoring. This testbed may be used to foster Verilog design challenges for FPGA devices, in which solutions are assessed against performance and energy consumption metrics.
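For context, a client-side energy estimate could be gathered from such a monitoring web interface roughly as sketched below; the endpoint path and JSON field are hypothetical assumptions, since the paper's actual interface is not specified here.

```python
# Hypothetical sketch of polling a monitoring web interface for power readings
# and integrating them into an energy estimate; "/power" and "watts" are
# assumed names, not the testbed's published API.
import time
import requests

def measure_energy_joules(base_url: str, duration_s: int, interval_s: float = 1.0) -> float:
    """Integrate reported power (watts) over time to estimate energy (joules)."""
    energy = 0.0
    elapsed = 0.0
    while elapsed < duration_s:
        reading = requests.get(f"{base_url}/power", timeout=5).json()  # assumed endpoint
        energy += reading["watts"] * interval_s                        # assumed field name
        time.sleep(interval_s)
        elapsed += interval_s
    return energy
```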


2021, Vol 11 (19), pp. 9094
Author(s): Qidi Yin, Xu Zhou, Hangwei Zhang

The number of IoT devices in all aspects of our lives is increasing exponentially. Attackers can take control of IoT devices by exploiting vulnerabilities in their web interfaces. To guarantee IoT security, testing these devices to detect vulnerabilities is therefore very important. In this work, we present FirmHunter, an automated, state-aware, and introspection-driven grey-box fuzzer for Linux-based firmware images that builds on emulation. It employs a message-state queue to overcome the dependency problem in test cases. Furthermore, it implements a scheduler that collects execution information from system introspection to drive fuzzing towards more interesting test cases, which speeds up vulnerability discovery. We evaluate FirmHunter by emulating and fuzzing eight firmware images, including seven routers and one IP camera, and compare it with a state-of-the-art IoT fuzzer, FirmFuzz, and a web application scanner, ZAP. Our evaluation results show that (1) the message-state queue enables FirmHunter to resolve the dependencies in test cases and find real-world vulnerabilities that the other fuzzers cannot detect; (2) our scheduler accelerates the discovery of vulnerabilities by an average of 42%; and (3) FirmHunter is able to find unknown vulnerabilities.
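The message-state queue idea, keeping the session state produced by earlier messages so that dependent requests in a test case can be replayed in order, can be sketched conceptually as below. The data structures and the send callback are assumptions for illustration; FirmHunter's actual implementation may differ.

```python
# Conceptual sketch of a message-state queue: each message in a test case
# carries the state (e.g. cookies/tokens) produced by its prerequisites, so
# dependent web requests can be replayed in order. Illustrative only.
from collections import deque
from dataclasses import dataclass, field

@dataclass
class Message:
    request: dict                                    # method, path, params to send
    depends_on: list = field(default_factory=list)   # indices of prerequisite messages
    state: dict = field(default_factory=dict)        # session state captured after replay

queue: deque[list[Message]] = deque()   # each entry is one ordered test case

def next_test_case() -> list[Message] | None:
    return queue.popleft() if queue else None

def run_test_case(messages: list[Message], send) -> None:
    state: dict = {}
    for msg in messages:
        # Merge in the state produced by prerequisite messages before sending.
        for dep in msg.depends_on:
            state.update(messages[dep].state)
        response = send(msg.request, state)          # user-supplied transport callback
        msg.state = response.get("session", {})      # capture new state for dependents
```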


2021
Author(s): Yvonne M Bradford, Ceri E Van Slyke, Amy Singer, Holly Paddock, Anne Eagle, ...

The Zebrafish Information Network (ZFIN, zfin.org) is the central repository for zebrafish genetic and genomic data. ZFIN expertly curates, integrates, and displays zebrafish data including genes, alleles, human disease models, gene expression, phenotype, gene function, orthology, morpholinos, CRISPRs, TALENs, and antibodies. ZFIN makes zebrafish research data Findable, Accessible, Interoperable, and Reusable (FAIR) through nomenclature, curatorial, and annotation activities, web interfaces, and data downloads. ZFIN is a founding member of the Alliance of Genome Resources, providing zebrafish data for integration into the cross-species platform as well as contributing to model organism data harmonization efforts.


2021
Author(s): Zexian Zeng, Cheryl J Wong, Lin Yang, Nofal Ouardaoui, Dian Li, ...

Syngeneic mouse models are tumors derived from murine cancer cells engrafted on genetically identical mouse strains. They are widely used tools for studying tumor immunity and immunotherapy response in the context of a fully functional murine immune system. Large volumes of syngeneic mouse tumor expression profiles under different immunotherapy treatments have been generated, although a lack of systematic collection and analysis makes data reuse challenging. We present Tumor Immune Syngeneic MOuse (TISMO), a database with an extensive collection of syngeneic mouse model profiles with interactive visualization features. TISMO contains 605 in vitro RNA-seq samples from 49 syngeneic cancer cell lines across 23 cancer types, of which 195 underwent cytokine treatment. TISMO also includes 1518 in vivo RNA-seq samples from 68 syngeneic mouse tumor models across 19 cancer types, of which 832 were from immune checkpoint blockade (ICB) studies. We manually annotated the sample metadata, such as cell line, mouse strain, transplantation site, treatment, and response status, and uniformly processed and quality-controlled the RNA-seq data. Besides data download, TISMO provides interactive web interfaces to investigate whether specific gene expression, pathway enrichment, or immune infiltration level is associated with differential immunotherapy response. TISMO is available at http://tismo.cistrome.org.


2021
Author(s): Wenliang Zhang, Yang Liu, Zhuochao Min, Guodong Liang, Jing Mo, ...

Many circRNA transcriptome datasets have been deposited in public resources, but these data show great heterogeneity. Researchers without bioinformatics skills have difficulty investigating these invaluable data or their own data. Here, we specifically designed circMine (http://hpcc.siat.ac.cn/circmine and http://www.biomedical-web.com/circmine/), which provides 1 821 448 entries formed by 136 871 circRNAs, 87 diseases and 120 circRNA transcriptome datasets of 1107 samples across 31 human body sites. circMine further provides 13 online analytical functions to comprehensively investigate these datasets and evaluate the clinical and biological significance of circRNAs. To improve data applicability, each dataset was standardized and annotated with relevant clinical information. All 13 analytic functions allow users to group samples based on their clinical data and assign different parameters for different analyses, and enable them to perform these analyses using their own circRNA transcriptomes. Moreover, three additional tools were developed in circMine to systematically discover circRNA-miRNA interactions and circRNA translatability. For example, using circMine we systematically discovered five potentially translatable circRNAs associated with prostate cancer progression. In summary, circMine provides user-friendly web interfaces to browse, search, analyze and download data freely, and to submit new data for further integration, and it can be an important resource for discovering significant circRNAs in different diseases.


Author(s): Haowen Xu, Chieh (Ross) Wang, Anne Berres, Tim LaClair, Jibonananda Sanyal

As traffic simulation software becomes more effective for realistically simulating and analyzing traffic dynamics and vehicle interactions at the mesoscopic and microscopic levels, the management, dissemination, and collaborative visualization of traffic simulation results produced by individual transportation planners present a significant challenge. Existing online content management systems offer very limited capability for querying specific traffic simulation scenarios and geospatially visualizing simulation results through shareable and interactive web interfaces. This paper presents a web-based application for promoting the archiving, sharing, and visualization of large-scale traffic simulation outputs. The application is developed to enhance cyber-physical controls, communications, and public education for collaborative transportation planning. Unique features of the web application include: (a) allowing users to upload new traffic simulation scenarios (parameters and outputs), as well as search existing scenarios, through easily accessible interfaces; (b) optimizing simulation output files with heterogeneous data formats and projected coordinate systems for web-based storage and management using a scalable and searchable data/metadata standard; (c) standardizing user-uploaded simulation outputs using web interfaces and data processing libraries with parallel computing capacity; and (d) providing shareable web visual interfaces for visualizing the traffic flow and signal information stored in simulation outputs (e.g., regional traffic patterns and individual vehicle interactions) and visually comparing multiple simulation outputs both spatially and temporally. The paper presents the conceptual design and implementation of this application, and demonstrates its performance for sharing, comparing, and visualizing simulation outputs from VISSIM and SUMO, two commonly used traffic simulation software programs.
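One part of the standardization described in items (b) and (c), converting heterogeneous projected coordinate systems into a single web-friendly CRS, might look roughly like the hedged sketch below; the source EPSG code and sample point are placeholders, not details taken from the paper.

```python
# Hedged sketch of coordinate standardization for web mapping: reproject a
# simulation output's projected (x, y) coordinates into WGS84 (lon, lat).
from pyproj import Transformer

def to_wgs84(points_xy, source_crs="EPSG:32616"):
    """Convert (x, y) pairs from a projected CRS to (lon, lat) in WGS84."""
    transformer = Transformer.from_crs(source_crs, "EPSG:4326", always_xy=True)
    return [transformer.transform(x, y) for x, y in points_xy]

# Example: a single (placeholder) vehicle trajectory point from a SUMO/VISSIM export
print(to_wgs84([(259000.0, 3975000.0)]))
```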


Author(s): Sawroop Kaur, Aman Singh, G. Geetha, Xiaochun Cheng

Due to the massive size of the hidden web, searching, retrieving, and mining rich, high-quality data can be a daunting task. Moreover, because of the presence of forms, data cannot be accessed easily. Forms are dynamic, heterogeneous, and spread over trillions of web pages. Significant efforts have addressed the problem of tapping into the hidden web to integrate and mine rich data. Effective techniques, as well as their application to special cases, need to be explored to achieve an effective harvest rate. One such special area is atmospheric science, where hidden web crawling is rarely implemented and a crawler must traverse the huge web to narrow the search down to specific data. In this study, an intelligent hidden web crawler for harvesting data in urban domains (IHWC) is implemented to address related problems such as domain classification, prevention of exhaustive searching, and URL prioritization. The crawler also performs well in curating pollution-related data. The crawler targets relevant web pages and discards irrelevant ones by applying rejection rules. To achieve more accurate results for a focused crawl, IHWC crawls websites by priority for a given topic. The work fulfills the dual objective of developing an effective hidden web crawler that can focus on diverse domains and of checking its integration for searching pollution data in smart cities. One of the objectives of smart cities is to reduce pollution, and the crawled data can be used to find its causes. The crawler can help users determine the level of pollution in a specific area. The harvest rate of the crawler is compared with that of pioneering existing work. As the dataset grows, the presented crawler can add significant value to emission accuracy. Our results demonstrate the accuracy and harvest rate of the proposed framework, which efficiently collects hidden web interfaces from large-scale sites and achieves higher harvest rates than other crawlers.
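The two mechanisms named in the abstract, rejection rules and priority-driven crawling, can be illustrated with the minimal sketch below; the specific patterns and topic terms are assumptions for demonstration, not the rules used by IHWC.

```python
# Illustrative sketch of rejection rules plus a best-first URL frontier for a
# focused (pollution-oriented) crawl; patterns and terms are assumed examples.
import heapq
import re

REJECT_PATTERNS = [re.compile(p) for p in (r"/login", r"\.(jpg|png|pdf)$", r"/ads?/")]
TOPIC_TERMS = ("pollution", "air quality", "emission", "pm2.5")

def rejected(url: str) -> bool:
    """A URL matching any rejection rule is discarded outright."""
    return any(p.search(url) for p in REJECT_PATTERNS)

def relevance(url: str, anchor_text: str) -> int:
    """Simple topical score: count topic terms in the URL and anchor text."""
    text = (url + " " + anchor_text).lower()
    return sum(term in text for term in TOPIC_TERMS)

frontier: list[tuple[int, str]] = []   # min-heap; scores are negated for best-first order

def enqueue(url: str, anchor_text: str = "") -> None:
    if not rejected(url):
        heapq.heappush(frontier, (-relevance(url, anchor_text), url))

def next_url() -> str | None:
    return heapq.heappop(frontier)[1] if frontier else None
```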

