Hybrid C&C-DMA Scheme for Multi-User Access in Large-Scale VLC Network

Author(s): Minglei Huang, Zhitong Huang, Yu Xiao, Yuefeng Ji


1999, Vol 3 (1), pp. 53-60
Author(s): Kristi Yuthas, Dennis F. Togo

In this era of massive data accumulation, dynamic development of large-scale databases, and interfaces intended to be user-friendly, demand on analysts continues to grow because direct user access to databases is still not common practice. A data dictionary approach, which provides users with a list of relevant data items within the database, can expedite the analysis of information requirements and the development of user-requested information systems. Furthermore, this approach enhances user involvement and reduces the demands placed on analysts during systems development projects.
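The core idea, giving users direct visibility into what the database contains, can be made concrete with a short sketch. Below is a minimal, hypothetical Python example (the database file and its contents are assumptions, not from the article) that derives a user-facing data dictionary from a SQLite database's own schema metadata:

```python
import sqlite3

# Hypothetical example: build a simple user-facing data dictionary
# from a SQLite database's schema metadata. The file name is an
# illustrative assumption.
conn = sqlite3.connect("accounting.db")

for (table,) in conn.execute(
    "SELECT name FROM sqlite_master WHERE type = 'table' ORDER BY name"
):
    print(f"Table: {table}")
    # PRAGMA table_info yields (cid, name, type, notnull, default, pk)
    for cid, name, col_type, notnull, default, pk in conn.execute(
        f"PRAGMA table_info({table})"
    ):
        print(f"  {name:<20} {col_type:<10} {'NOT NULL' if notnull else ''}")

conn.close()
```

A listing like this, handed to end users, is exactly the kind of "list of relevant data items" the approach relies on.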


2015, Vol 137 (7)
Author(s): Cory R. Schaffhausen, Timothy M. Kowalewski

Understanding user needs and preferences is increasingly recognized as a critical component of early-stage product development. The large-scale needfinding methods in this series of studies attempt to overcome shortcomings of existing methods, particularly in environments with limited user access. The three studies evaluated three specific types of stimuli to help users describe higher quantities of needs. Users were trained on need statements and then asked to enter as many need statements and optional background stories as possible. One or more stimulus types were presented, including prompts (a type of thought exercise), shared needs, and shared context images. The topics were general household areas, including cooking, cleaning, and trip planning. The results show that users can articulate a large number of needs unaided, and that users consistently increased need quantity after viewing a stimulus. A final study collected 1735 need statements and 1246 stories from 402 individuals in 24 hours. Shared needs and images increased need quantity significantly more than the other stimulus types. User experience (not expertise) was a significant factor in increasing quantity, but may not warrant exclusive use of high-experience users in practice.


2018, Vol 14 (9), pp. 155014771880153
Author(s): László Viktor Jánoky, János Levendovszky, Péter Ekler

JSON Web Tokens provide a scalable solution with significant performance benefits for user access control in decentralized, large-scale distributed systems; examples include cloud-based, microservices-style systems and typical Internet of Things solutions. One of the obstacles still preventing widespread use of JSON Web Token–based access control is the problem of invalidating issued tokens when clients leave the system. Token invalidation currently incurs either considerable processing overhead or drastically increased architectural complexity. Solving this problem without losing the main benefits of JSON Web Tokens remains an open challenge, which this article addresses. We propose several solutions for implementing low-complexity token revocation and compare their characteristics with those of traditional solutions in different environments. The proposed solutions preserve the advantages of JSON Web Tokens while adhering to stronger security constraints and carrying a finely tunable performance cost.
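As a concrete illustration of the invalidation problem, here is a minimal Python sketch using the PyJWT library: tokens carry a unique jti claim, and revocation is handled by an in-memory denylist. This is a generic pattern shown for illustration, not the revocation scheme proposed by the authors; the key and TTL values are assumptions.

```python
# Minimal JWT issuance plus denylist-based revocation (illustrative
# only; not the authors' proposed scheme). Requires PyJWT.
import time
import uuid

import jwt  # PyJWT

SECRET = "demo-secret"   # assumption: shared HMAC key
revoked_jtis = set()     # grows until tokens expire -> the overhead at issue

def issue_token(subject: str, ttl_seconds: int = 300) -> str:
    now = int(time.time())
    claims = {
        "sub": subject,
        "iat": now,
        "exp": now + ttl_seconds,  # short expiry bounds denylist growth
        "jti": str(uuid.uuid4()),  # unique ID so a token can be revoked
    }
    return jwt.encode(claims, SECRET, algorithm="HS256")

def revoke(token: str) -> None:
    claims = jwt.decode(token, SECRET, algorithms=["HS256"])
    revoked_jtis.add(claims["jti"])

def is_valid(token: str) -> bool:
    try:
        claims = jwt.decode(token, SECRET, algorithms=["HS256"])
    except jwt.InvalidTokenError:
        return False
    return claims["jti"] not in revoked_jtis

token = issue_token("alice")
assert is_valid(token)
revoke(token)
assert not is_valid(token)
```

Every validation now requires a denylist lookup, and in a distributed deployment that state must be shared across services: precisely the processing-overhead versus architectural-complexity trade-off described above.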


F1000Research, 2016, Vol 5, pp. 381
Author(s): Tariq Daouda, Claude Perreault, Sébastien Lemieux

pyGeno is a Python package mainly intended for precision medicine applications that revolve around genomics and proteomics. It integrates reference sequences and annotations from Ensembl, genomic polymorphisms from the dbSNP database, and data from next-gen sequencing into an easy-to-use, memory-efficient and fast framework, allowing the user to easily explore subject-specific genomes and proteomes. Compared to a standalone program, pyGeno gives the user access to the complete expressivity of Python, a general-purpose programming language. Its range of application therefore encompasses both short scripts and large-scale genome-wide studies.
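To give a flavor of the short-script end of that range, here is a sketch following the query pattern shown in pyGeno's documentation. The genome name and gene are placeholder assumptions, and the reference data must first be imported with pyGeno's bootstrap utilities.

```python
# Sketch of a pyGeno query, per the usage pattern in its docs; the
# genome build and gene name below are placeholder assumptions, and
# the corresponding datawrap must already be imported.
from pyGeno.Genome import *  # exposes Genome, Gene, Protein, ...

ref = Genome(name="GRCh37.75")         # reference genome, by Ensembl name
gene = ref.get(Gene, name="TPST2")[0]  # look up a gene by name
for prot in gene.get(Protein):         # iterate its protein products
    print(prot.id, prot.sequence[:30])
```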


2018, Vol 34 (1), pp. 45-59
Author(s): Cory Lampert

Purpose: Digital library managers face growing pressure to digitize materials efficiently on a larger scale. This paper addresses the staffing and other resources needed to evolve smaller-scope operations into teams capable of larger-scale production.

Design/methodology/approach: Much of the current literature focuses on the philosophy of such projects and on issues of metadata and user access. In contrast, this article supplies practical information for digital library managers who must take immediate action to meet new mandates and reach higher target goals within the constraints of limited resources.

Findings: The author provides an overview of the resources needed to increase digitization output and analyzes three key resources that digital library managers can target in a range of environments, with practical advice on phasing in new staffing configurations, outsourcing of materials, and high-efficiency equipment.

Originality/value: This paper examines the gap between smaller-scale digitization and successful large-scale projects, and offers several possible scenarios for organizations to consider as they move forward in a way that suits their goals. The focus is neither the rationale for large-scale digitization nor detailed specifications for large-scale workflows; rather, the paper outlines the types of resources (internal and external), decision points, and specific practical strategies for digital library managers seeking to ramp up production.


2020, Vol 69 (1), pp. 314-318
Author(s): B.A. Doszhanov

Along with the socio-economic sphere, growing interest in Internet technologies among public-private partnerships, companies, institutions, and organizations makes it possible to integrate blockchains into large infrastructure. This can be achieved by granting user access and changing the algorithms by which internal databases are stored. The main feature of blockchain technology is that it is based on decentralization: even with protective measures in place, a database located on a single server can be compromised, whereas such a failure is not possible in a blockchain. The article discusses the concepts of cryptocurrency, blockchain, and Bitcoin, as well as the principles of their operation. Prospects for their development and their large-scale impact on financial activities carried out via the Internet are presented. The cryptocurrencies themselves, and the technologies and algorithms they use, are also described.
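The tamper resistance attributed to blockchains comes from hash chaining, which a short sketch can make concrete. The following minimal Python example is illustrative only (real blockchains add consensus protocols, digital signatures, and peer-to-peer replication); it shows how altering any stored record invalidates every subsequent block:

```python
# Minimal hash-chain sketch: each block commits to its predecessor's
# hash, so editing any record breaks every later link in the chain.
import hashlib
import json
import time

def block_hash(block: dict) -> str:
    payload = json.dumps(block, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()

def make_block(data: str, prev_hash: str) -> dict:
    return {"data": data, "prev_hash": prev_hash, "ts": time.time()}

def verify(chain: list) -> bool:
    # Recompute each link; any edited block breaks the chain.
    for prev, curr in zip(chain, chain[1:]):
        if curr["prev_hash"] != block_hash(prev):
            return False
    return True

chain = [make_block("genesis", "0" * 64)]
chain.append(make_block("alice pays bob 5", block_hash(chain[-1])))
chain.append(make_block("bob pays carol 2", block_hash(chain[-1])))

assert verify(chain)
chain[1]["data"] = "alice pays bob 500"  # tamper with history
assert not verify(chain)                 # detected immediately
```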


2021, Vol 5 (1), pp. 1-7
Author(s): Odila Abduraimovna Islamova, Zoya Sergeevna Chay, Feruza Saidovna Rakhimova, Feruza Saydaxmatovna Abdullayeva

This work belongs to the field of limit theorems for separable statistics. In particular, it considers the number of empty cells after particles are placed into a finite number of cells, each particle allocated according to a polynomial (multinomial) scheme. The statistics under consideration belong to the class of separable statistics studied earlier in (Mirakhmedov, 1985), where the corresponding statements for the allocation of particles into a countable number of cells were proved. The same scheme was considered in (Asimov, 1982), which established conditions for the asymptotic normality of the random variables. In this paper, the asymptotic normality of the statistics in question is proved and an estimate of the remainder term in the central limit theorem is obtained. More broadly, demand for such systems grows daily, whether to handle large-scale databases or to ease user access to data management, because such systems are used not only for data entry and storage: they also describe data structure, maintain the logical consistency of file collections, provide a data-processing language, recover data after various interruptions, and allow access by multiple users.
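For orientation, a standard formulation of this limit in the classical equiprobable case (an illustrative assumption; the paper treats the general polynomial scheme) reads as follows:

```latex
% Classical equiprobable allocation: n particles placed independently
% and uniformly into N cells; \mu_0 = number of empty cells;
% n/N \to \alpha > 0 as n, N \to \infty.
\mathbb{E}\,\mu_0 = N\Bigl(1 - \tfrac{1}{N}\Bigr)^{n} \sim N e^{-\alpha},
\qquad
\operatorname{Var}\mu_0 \sim N\bigl(e^{-\alpha} - (1+\alpha)\,e^{-2\alpha}\bigr),
% and the normalized count is asymptotically standard normal:
\frac{\mu_0 - \mathbb{E}\,\mu_0}{\sqrt{\operatorname{Var}\mu_0}}
  \;\xrightarrow{\,d\,}\; \mathcal{N}(0,1).
```

The paper's contribution is a result of this type for the separable statistics in question, together with a bound on the remainder term.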


1997, Vol 08 (03), pp. 347-361
Author(s): Burkhard Monien, Ralf Diekmann, Reinhard Lüling

Reconfigurable communication networks for massively parallel multiprocessor systems make it possible to satisfy a number of application demands, such as special communication patterns or real-time requirements. This paper presents the design principle of a reconfigurable network able to realize any graph of maximal degree four. The architecture is based on a special multistage Clos network constructed from a number of static routing switches of equal size. Upper bounds on the cut size of 4-regular graphs split into a number of clusters allow the number of switches and connections to be minimized while still offering the desired reconfiguration capabilities, as well as large scalability and flexible multi-user access. Efficient algorithms for configuring the architecture are based on an old result by Petersen [27] on the decomposition of regular graphs. The concept presented here is the basis for the Parsytec SC series of reconfigurable MPP systems. The currently largest realization, with 320 processors, is presented in greater detail.
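Petersen's theorem (1891) states that every 2k-regular graph decomposes into k edge-disjoint 2-factors; for the degree-4 networks above, that means two 2-factors. The following Python sketch illustrates the standard constructive argument (orient the edges along an Eulerian circuit, then match out-arcs to in-arcs) on an assumed example graph; it is not the paper's actual configuration algorithm.

```python
# Petersen 2-factorization sketch for a connected 4-regular graph:
# orienting the edges along an Eulerian circuit gives every vertex
# in-degree 2 and out-degree 2; a perfect matching between out-sides
# and in-sides of the arcs then yields one 2-factor, and the leftover
# arcs form the other. Illustrative only.
import networkx as nx

def two_factorization(G: nx.Graph):
    # Orient each edge along an Eulerian circuit (exists: degrees even).
    arcs = list(nx.eulerian_circuit(G))

    # Bipartite graph: ('out', u) -- ('in', v) for every arc u -> v.
    B = nx.Graph()
    B.add_edges_from((("out", u), ("in", v)) for u, v in arcs)
    top = [("out", u) for u in G.nodes]
    matching = nx.bipartite.hopcroft_karp_matching(B, top_nodes=top)

    # Matched arcs give one 2-factor; the remaining arcs give the other.
    f1 = {(u, v) for u, v in arcs if matching[("out", u)] == ("in", v)}
    f2 = set(arcs) - f1
    return ([tuple(sorted(e)) for e in f1], [tuple(sorted(e)) for e in f2])

G = nx.circulant_graph(8, [1, 2])  # an assumed 4-regular example graph
f1, f2 = two_factorization(G)
for factor in (f1, f2):            # in each factor every vertex has degree 2
    H = nx.Graph(factor)
    assert all(d == 2 for _, d in H.degree)
```

Each 2-factor is a disjoint union of cycles, which is what makes the decomposition useful for routing a degree-4 graph through switch stages.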

