granularity level
Recently Published Documents

TOTAL DOCUMENTS: 47 (FIVE YEARS: 20)
H-INDEX: 6 (FIVE YEARS: 1)

Author(s):  
Mohammad Alkandari ◽  
Jassim Alfadhli ◽  
Lamis Waleed ◽  
...  

The 5G cellular network is expected to sustain various QoS (Quality of Service) requirements and provide customers with multiple services based on their requirements. Implementing 5G networks in an IoT (Internet of Things) infrastructure can help serve the requirements of IoT devices up to 100x faster and more efficiently. This objective can be accomplished by applying the network slicing approach, which partitions a single physical infrastructure into multiple virtual resources that can be distributed among different devices independently. This paper merges the benefits of both static allocation and the network slicing approach to propose a mechanism that can allocate resources efficiently among multiple customers. The allocation mechanism is based on a pre-defined policy between the slice provider and the customer that specifies the attributes to be computed before any allocation process. Network slicing is a distinctive recent 5G technology, and it introduces diverse requirements for sustaining an adequate granularity level on top of the traditional network infrastructure. The main objective of this paper is to present a simulation suite for a network consisting of base stations and clients, in which probable 5G scenarios can attain high standards of network operation and allow a better and easier analysis of various concepts. The results show that the network slicing methodology is enhanced with respect to blocking and that, as the block ratio increases, bandwidth usage increases correspondingly.
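As a rough illustration of the mechanism described above, the sketch below simulates policy-based slice allocation at a single base station and reports the block ratio and bandwidth utilization. The slice names, per-slice capacities, and request distribution are invented for the example and are not taken from the paper; allocated bandwidth is also never released, which keeps the sketch deliberately simple.

```python
import random

# Illustrative slice policy: each slice gets a fixed share of the base
# station bandwidth (in Mbps). Values are assumptions, not from the paper.
POLICY = {"eMBB": 400.0, "URLLC": 100.0, "mMTC": 50.0}

def simulate(num_requests=10_000, seed=42):
    rng = random.Random(seed)
    free = dict(POLICY)            # remaining bandwidth per slice
    blocked = 0
    used = 0.0

    for _ in range(num_requests):
        slice_name = rng.choice(list(POLICY))
        demand = rng.uniform(0.1, 2.0)          # Mbps requested by one client
        if free[slice_name] >= demand:
            free[slice_name] -= demand          # allocate within the slice
            used += demand
        else:
            blocked += 1                        # request rejected: slice is full

    block_ratio = blocked / num_requests
    utilization = used / sum(POLICY.values())
    return block_ratio, utilization

if __name__ == "__main__":
    ratio, util = simulate()
    print(f"block ratio = {ratio:.3f}, bandwidth utilization = {util:.3f}")
```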


Author(s):  
Marcus Guidoti ◽  
Carolina Sokolowicz ◽  
Felipe Simoes ◽  
Valdenar Gonçalves ◽  
Tatiana Ruschel ◽  
...  

Plazi's TreatmentBank is a research infrastructure and partner of the recent European Union-funded Biodiversity Community Integrated Knowledge Library (BiCIKL) project to provide a single knowledge portal to open, interlinked and machine-readable, findable, accessible, interoperable and reusable (FAIR) data. Plazi is liberating published biodiversity data that is trapped in so-called flat formats, such as portable document format (PDF), to increase its FAIRness. This can pose a variety of challenges for both data mining and curation of the extracted data. The automation of such a complex process requires internal organization and a well established workflow of specific steps (e.g., decoding of the PDF, extraction of data) to handle the challenges that the immense variety of graphic layouts existing in the biodiversity publishing landscape can impose. These challenges may vary according to the origin of the document: scanned documents that were not initially digital need optical character recognition in order to be processed. Processing a document can either be an individual, one-time-only process, or a batch process, in which a template for a specific document type must be produced. Templates consist of a set of parameters that tell Plazi-dedicated software how to read and where to find key pieces of information for the extraction process, such as the related metadata. These parameters aim to improve the outcome of the data extraction process and lead to more consistent results than manual extraction. In order to produce such templates, a set of tests and accompanying statistics are evaluated, and these same statistics are constantly checked against ongoing processing tasks in order to assess the template performance in a continuous manner. In addition to these steps that are intrinsically associated with the automated process, different granularity levels (e.g., a low granularity level might consist of a treatment and its subsections, versus a high granularity level that includes material citations down to named entities such as collection codes, collector, collecting date) were defined to accommodate specific needs for particular projects and user requirements. The higher the granularity level, the more thoroughly checked the resulting data is expected to be. Additionally, steps related to quality control (QC), such as the “pre-qc”, “qc” and “extended qc”, were designed and implemented to ensure data quality and enhanced data accuracy. Data on all these different stages of the processing workflow are constantly being collected and assessed in order to improve these very same stages, aiming for a more reliable and efficient operation. This is also associated with a current Data Architecture plan to move this data assessment to a cloud provider to promote real-time assessment and constant analyses of template performance and processing stages as a whole. In this talk, the steps of this entire process are explained in detail, highlighting how data are being used to improve these steps towards a more efficient, accurate, and less costly operation.
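To make the notion of a template more concrete, the following sketch models a template as a small set of parameters together with the granularity level and the quality-control stages it implies. The field names and values are hypothetical and do not reflect Plazi's actual template schema.

```python
from dataclasses import dataclass, field

# Hypothetical sketch of a processing template: a set of parameters that
# tell an extraction tool how to read a specific document layout.
@dataclass
class Template:
    journal: str
    page_width_mm: float
    column_count: int
    needs_ocr: bool = False                  # scanned, non-born-digital input
    granularity: str = "low"                 # "low" = treatment/subsections,
                                             # "high" = down to named entities
    metadata_regions: dict = field(default_factory=dict)

    def qc_stages(self):
        """Higher granularity implies more thorough quality control."""
        stages = ["pre-qc", "qc"]
        if self.granularity == "high":
            stages.append("extended qc")
        return stages

template = Template(
    journal="Example Journal of Taxonomy",   # placeholder name
    page_width_mm=210.0,
    column_count=2,
    metadata_regions={"title": "top-of-first-page", "doi": "footer"},
)
print(template.qc_stages())   # ['pre-qc', 'qc'] for a low-granularity run
```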


2021 ◽  
Author(s):  
Salsabila Ramadhina ◽  
Rizal Broer Bahawares ◽  
Irman Hermadi ◽  
Arif Imam Suroso ◽  
Ahmad Rodoni ◽  
...  

Author(s):  
Zhongkai Li ◽  
Wenyuan Wei

To cope with an increasingly competitive market environment, manufacturers are utilizing modular technology to guide the production process, and a vital activity in the module partition is to determine the optimal granularity levels. In this paper, a modular design methodology is developed for obtaining the optimal granularity of a modularized architecture. A relationship extraction solution is executed to automatically construct the design structure matrix (DSM) from the 3D CAD assembly model. A hierarchical clustering algorithm is implemented to form a hierarchical dendrogram with different granularity levels. An improved Elbow method is proposed to determine the optimum granularity level and the corresponding modularity spectrum from the dendrogram. The computational framework for hierarchical clustering and modularization with the improved Elbow assessment operators is explained. Based on an existing literature example and a jaw crusher modular design case, comparative studies are carried out to verify the effectiveness and practicality of the proposed method. The methodology runs independently on the computer, presenting results in a data visualization format without human involvement, and the obtained results with optimized granularity favor further modular design work.
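The sketch below illustrates the general pipeline described above on a toy design structure matrix: hierarchical clustering builds a dendrogram, and a plain elbow heuristic (not the paper's improved Elbow operators) picks a granularity level. The DSM values and the distance transform are assumptions made for the example.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform

# Toy DSM: entry [i, j] = interaction strength between components i and j.
# Values are invented for illustration only.
dsm = np.array([
    [0, 5, 4, 0, 0, 0],
    [5, 0, 3, 0, 1, 0],
    [4, 3, 0, 0, 0, 0],
    [0, 0, 0, 0, 4, 5],
    [0, 1, 0, 4, 0, 3],
    [0, 0, 0, 5, 3, 0],
], dtype=float)

# Turn interaction strengths into distances (strong interaction = close).
dist = 1.0 / (1.0 + dsm)
np.fill_diagonal(dist, 0.0)
Z = linkage(squareform(dist, checks=False), method="average")

def within_cost(labels):
    """Total within-module distance for a given module assignment."""
    cost = 0.0
    for m in np.unique(labels):
        idx = np.where(labels == m)[0]
        cost += dist[np.ix_(idx, idx)].sum() / 2.0
    return cost

# Sweep the number of modules and pick the k where the cost curve bends most.
ks = range(1, dsm.shape[0] + 1)
costs = [within_cost(fcluster(Z, k, criterion="maxclust")) for k in ks]
drops = np.diff(costs)                        # improvement per extra module
elbow_k = int(np.argmax(np.diff(drops))) + 2  # sharpest flattening of the curve
print("costs:", [round(c, 2) for c in costs], "-> chosen granularity k =", elbow_k)
```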


2021 ◽  
pp. 1-18
Author(s):  
Huajun Chen ◽  
Ning Hu ◽  
Guilin Qi ◽  
Haofen Wang ◽  
Zhen Bi ◽  
...  

Abstract The early concept of the knowledge graph originates from the idea of the Semantic Web, which aims at using structured graphs to model the knowledge of the world and record the relationships that exist between things. Currently, publishing knowledge bases as open data on the Web has gained significant attention. In China, CIPS (Chinese Information Processing Society) launched OpenKG in 2015 to foster the development of Chinese Open Knowledge Graphs. Unlike existing open knowledge-based programs, OpenKG chain is envisioned as a blockchain-based open knowledge infrastructure. This article introduces the first attempt at the implementation of sharing knowledge graphs on OpenKG chain, a blockchain-based trust network. We have completed the test of the underlying blockchain platform, as well as the on-chain test of OpenKG's dataset and toolset sharing and of fine-grained knowledge crowdsourcing at the triple level. We have also proposed novel definitions, K-Point and OpenKG Token, which can be considered as measurements of knowledge value and user value. A total of 1,033 knowledge contributors were involved in two months of testing on the blockchain, and the cumulative number of on-chain recordings triggered by real knowledge consumers reached 550,000, with an average daily peak value of more than 10,000. For the first time, we have tested and realized on-chain sharing of knowledge at the entity/triple granularity level. At present, all operations on the datasets and toolset in OpenKG.CN, as well as the triplets in OpenBase, are recorded on the chain, and corresponding value will also be generated and assigned in a trusted mode. Via this effort, OpenKG chain looks to provide a more credible and traceable knowledge-sharing platform for the knowledge graph community.
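As a loose illustration of triple-level, hash-chained recording, the sketch below appends each operation on a (subject, predicate, object) triple to a chain together with a contributor and an assumed K-Point value. The field names and the reward rule are invented for the example and are not OpenKG chain's actual data model.

```python
import hashlib, json, time

# Hypothetical illustration of triple-granularity on-chain recording: each
# operation on a triple becomes a record linked to the previous one by hash.
def record(chain, contributor, triple, operation, k_points=1):
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    body = {
        "contributor": contributor,
        "triple": triple,                   # e.g. ("OpenKG", "launchedBy", "CIPS")
        "operation": operation,             # "add", "update", "vote", ...
        "k_points": k_points,               # assumed unit of knowledge value
        "timestamp": time.time(),
        "prev_hash": prev_hash,
    }
    body["hash"] = hashlib.sha256(
        json.dumps(body, sort_keys=True).encode()
    ).hexdigest()
    chain.append(body)
    return body

chain = []
record(chain, "alice", ("OpenKG", "launchedBy", "CIPS"), "add")
record(chain, "bob", ("OpenKG", "startedIn", "2015"), "add")
print(len(chain), "records; head hash:", chain[-1]["hash"][:12])
```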


2021 ◽  
Vol 22 (1) ◽  
Author(s):  
Nicola Licheri ◽  
Vincenzo Bonnici ◽  
Marco Beccuti ◽  
Rosalba Giugno

Abstract Background Graphs are mathematical structures widely used for expressing relationships among elements when representing biomedical and biological information. On top of these representations, several analyses are performed. A common task is the search of one substructure within one graph, called the target. The problem is referred to as one-to-one subgraph search, and it is known to be NP-complete. Heuristics and indexing techniques can be applied to facilitate the search. Indexing techniques are also exploited in the context of searching in a collection of target graphs, referred to as the one-to-many subgraph problem. Filter-and-verification methods that use indexing approaches provide a fast pruning of target graphs, or parts of them, that do not contain the query. The expensive verification phase is then performed only on the subset of promising targets. Indexing strategies extract graph features at a sufficient granularity level for performing a powerful filtering step. Features are stored in data structures that allow efficient access. Indexing size, querying time and filtering power are key points for the development of efficient subgraph searching solutions. Results An existing approach, GRAPES, has been shown to have good performance in terms of speed-up for both the one-to-one and one-to-many cases. However, it suffers from the size of the built index. For this reason, we propose GRAPES-DD, a modified version of GRAPES in which the indexing structure has been replaced with a Decision Diagram. Decision Diagrams are a broad class of data structures widely used to encode and manipulate functions efficiently. Experiments on biomedical structures and synthetic graphs have confirmed our expectation, showing that GRAPES-DD has substantially reduced memory utilization compared to GRAPES without worsening the searching time. Conclusion The use of Decision Diagrams for searching in biochemical and biological graphs is completely new and potentially promising thanks to their ability to encode sets compactly by exploiting their structure and regularity, and to manipulate entire sets of elements at once, instead of exploring each single element explicitly. Search strategies based on Decision Diagrams make indexing for biochemical graphs, and not only those, more affordable, potentially allowing us to deal with huge and ever-growing collections of biochemical and biological structures.
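The sketch below illustrates the filtering half of a filter-and-verification search: each target graph is indexed by its labelled paths up to a small length, and a target survives only if its feature counts dominate the query's. This mirrors the general idea of path-based indexing rather than GRAPES' or GRAPES-DD's actual index or decision-diagram encoding; the toy graphs and labels are assumptions.

```python
from collections import Counter

def label_paths(graph, labels, k=2):
    """Count labelled walks of up to k edges (graph: node -> set of neighbours)."""
    feats = Counter()
    def walk(node, path):
        feats[tuple(path)] += 1
        if len(path) <= k:
            for nxt in graph[node]:
                walk(nxt, path + [labels[nxt]])
    for node in graph:
        walk(node, [labels[node]])
    return feats

def filter_targets(query_feats, index):
    """Keep only targets whose feature counts dominate the query's counts."""
    return [tid for tid, feats in index.items()
            if all(feats[f] >= c for f, c in query_feats.items())]

# Toy collection of target graphs (adjacency + node labels).
targets = {
    "t1": ({0: {1}, 1: {0, 2}, 2: {1}}, {0: "C", 1: "N", 2: "O"}),
    "t2": ({0: {1}, 1: {0}},            {0: "C", 1: "C"}),
}
index = {tid: label_paths(g, lab) for tid, (g, lab) in targets.items()}

query = ({0: {1}, 1: {0}}, {0: "C", 1: "N"})        # look for a C-N edge
candidates = filter_targets(label_paths(*query), index)
print("candidates for verification:", candidates)    # expected: ['t1']
```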


2021 ◽  
Vol 113 (7-8) ◽  
pp. 2395-2412
Author(s):  
Baudouin Dafflon ◽  
Nejib Moalla ◽  
Yacine Ouzrout

Abstract This work aims to review literature related to the latest cyber-physical systems (CPS) for manufacturing in the revolutionary Industry 4.0 for a comprehensive understanding of the challenges, approaches, and techniques used in this domain. Different published studies on CPS for manufacturing in Industry 4.0 paradigms from 2010 to 2019 were searched and summarized. We then analyzed the studies at a different granularity level, inspecting the title, abstract, and full text to include them in the prospective study list. Out of 626 initially extracted relevant articles, we scrutinized 78 articles as the prospective studies on CPS for manufacturing in Industry 4.0. First, we analyzed the articles' context to identify the major components along with their associated fine-grained constituents of Industry 4.0. Then, we reviewed different studies through a number of synthesized matrices to narrate the challenges, approaches, and techniques used as the key enablers of CPS for manufacturing in Industry 4.0. Although the key technologies of Industry 4.0 are the CPS, Internet of Things (IoT), and Internet of Services (IoS), the human component (HC), cyber component (CC), physical component (PC), and their HC-CC, CC-PC, and HC-PC interfaces need to be standardized to achieve the success of Industry 4.0.


2021 ◽  
Vol 11 (3) ◽  
pp. 916
Author(s):  
Michal Kvet ◽  
Emil Kršák ◽  
Karol Matiaško

Current intelligent information systems require complex database approaches for managing and monitoring data in a spatio-temporal manner. In many cases, the core of the temporal system is built on a relational platform. In this paper, a summary of the temporal architectures with regard to the granularity level is proposed. Object, attribute, and synchronization group perspectives are discussed. An extension of the group temporal architecture that shifts the processing to spatio-temporal level synchronization is proposed. A data reflection model is proposed to cover transaction integrity with respect to a data model that evolves over time. It is supervised by our own Extended Temporal Log Ahead Rule, which evaluates not only the collisions themselves but also reflects the data model. The main emphasis is on the data retrieval process and indexing with regard to non-reliable data. Undefined value categorization, supervised by the NULL_representation data dictionary object and a memory pointer layer, is provided. Therefore, undefined (NULL) values can be part of the index structure. The definition and selection of the technology of the master index is proposed and discussed. It allows the index to be used as a way to identify blocks with relevant data, which is of practical importance in temporal systems where data fragmentation often occurs. The last part deals with the syntax of a Select statement extension covering the temporal environment with regard to the conventional syntax. Event_definition, spatial_positions, model_reflection, consistency_model, epsilon_definition, monitored_data_set, type_of_granularity, and NULL_category clauses are introduced. The impact on the performance of data manipulation operations is evaluated in the performance section, highlighting the temporal architectures, with Insert, Update, and Select statements forming the core performance characteristics.
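As a rough illustration of how undefined values can still participate in an index once they are categorized, the sketch below maps each NULL to a code from a small NULL-category dictionary and stores it alongside regular values. The category names, codes, and class layout are assumptions for the example, not the authors' implementation.

```python
# Hypothetical NULL-category dictionary in the spirit of a NULL_representation
# object: each kind of "undefined" gets its own indexable code.
NULL_CATEGORIES = {"unknown": -1, "not_applicable": -2, "not_yet_measured": -3}

class TemporalIndex:
    def __init__(self):
        self.entries = {}                       # index key -> list of row ids

    def insert(self, row_id, value, null_category=None):
        # NULLs are not skipped: they are indexed under their category code.
        key = NULL_CATEGORIES[null_category] if value is None else value
        self.entries.setdefault(key, []).append(row_id)

    def lookup(self, value=None, null_category=None):
        key = NULL_CATEGORIES[null_category] if value is None else value
        return self.entries.get(key, [])

idx = TemporalIndex()
idx.insert(1, 21.5)                             # sensor reading present
idx.insert(2, None, null_category="not_yet_measured")
idx.insert(3, None, null_category="unknown")
print(idx.lookup(null_category="not_yet_measured"))   # [2]
```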


2021 ◽  
Vol 1748 ◽  
pp. 022018
Author(s):  
Yang Fei ◽  
Shuo Dong ◽  
Li-Zhong Shi

Author(s):  
Nourchène Elleuch Ben Ayed ◽  
Wiem Khlif ◽  
Hanêne Ben-Abdellah

The necessity of aligning an enterprise's information system (IS) model to its business process (BP) model is incontestable for a consistent analysis of business performance. However, the main difficulty of establishing and maintaining BP-IS model alignment stems from the dissimilarities in the knowledge of the information system developers and the business process experts. To overcome these limits, the authors propose a model-driven architecture (MDA) compliant methodology that helps software analysts to build an IS analysis model aligned to a given BP model. The proposed methodology allows mastering the transformation from the computation independent model (CIM) to the platform independent model (PIM). The CIM level expresses the BP, which is modelled through the standard BPMN, while the PIM level represents the aligned IS model, which is generated as a use case diagram, system sequence diagrams, and a class diagram. The CIM to PIM transformation accounts for the BP structural and semantic perspectives to generate an aligned IS model that respects the best-practice granularity level and the quality of UML diagrams.
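To give a flavour of such a CIM-to-PIM transformation, the sketch below applies a few simplified mapping rules: BPMN lanes become actors, user tasks become use cases, and data objects become skeletal classes. The rules and the toy BPMN model are illustrative assumptions, not the authors' full methodology.

```python
# Toy CIM-level BPMN model; element names are invented for the example.
bpmn_model = {
    "lanes": ["Customer", "Sales Clerk"],
    "user_tasks": [("Place Order", "Customer"), ("Approve Order", "Sales Clerk")],
    "data_objects": ["Order", "Invoice"],
}

def cim_to_pim(bpmn):
    """Generate a skeletal PIM: a use case model plus a class model."""
    use_case_model = {
        "actors": list(bpmn["lanes"]),                    # lane -> actor
        "use_cases": [{"name": task, "actor": lane}       # user task -> use case
                      for task, lane in bpmn["user_tasks"]],
    }
    class_model = [{"name": obj, "attributes": [], "operations": []}
                   for obj in bpmn["data_objects"]]       # data object -> class
    return {"use_case_diagram": use_case_model, "class_diagram": class_model}

pim = cim_to_pim(bpmn_model)
print(pim["use_case_diagram"]["use_cases"][0])   # {'name': 'Place Order', ...}
```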

