CFAST - Consolidated Fire and Smoke Transport (Version 7) Volume 5: CFAST Fire Data Generator (CData)

2021 ◽  
Author(s):  
Paul A. Reneke ◽  
Richard D. Peacock ◽  
Stanley W. Gilbert ◽  
Thomas G. Cleary
2021 ◽  
Vol 22 (1) ◽  
Author(s):  
João Lobo ◽  
Rui Henriques ◽  
Sara C. Madeira

Abstract
Background: Three-way data have gained popularity due to their increasing capacity to describe inherently multivariate and temporal events, such as biological responses, social interactions along time, urban dynamics, or complex geophysical phenomena. Triclustering, the subspace clustering of three-way data, enables the discovery of patterns corresponding to data subspaces (triclusters) with values correlated across the three dimensions (observations × features × contexts). With an increasing number of algorithms being proposed, effectively comparing them with the state of the art is paramount. These comparisons are usually performed using real data without a known ground truth, thus limiting the assessments. In this context, we propose a synthetic data generator, G-Tric, allowing the creation of synthetic datasets with configurable properties and the possibility to plant triclusters. The generator is prepared to create datasets resembling real three-way data from biomedical and social data domains, with the additional advantage of also providing the ground truth (triclustering solution) as output.
Results: G-Tric can replicate real-world datasets and create new ones that match researchers' needs across several properties, including data type (numeric or symbolic), dimensions, and background distribution. Users can tune the patterns and structure that characterize the planted triclusters (subspaces) and how they interact (overlapping). Data quality can also be controlled by defining the amount of missing values, noise, or errors. Furthermore, a benchmark of datasets resembling real data is made available, together with the corresponding triclustering solutions (planted triclusters) and generating parameters.
Conclusions: Triclustering evaluation using G-Tric makes it possible to combine intrinsic and extrinsic metrics, producing more reliable analyses. A set of predefined datasets, mimicking widely used three-way data and exploring crucial properties, was generated and made available, highlighting G-Tric's potential to advance the triclustering state of the art by easing the evaluation of new triclustering approaches.
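The core idea the abstract describes, planting a tricluster with a known pattern inside a random three-way background so the ground truth is available for evaluation, can be sketched in a few lines of NumPy. This is an illustrative sketch only; the array sizes, the constant pattern, and the variable names are assumptions, not G-Tric's actual API or defaults.

```python
import numpy as np

# Illustrative sketch of tricluster planting (not G-Tric's actual API).
rng = np.random.default_rng(42)

# Background distribution: observations x features x contexts
data = rng.normal(loc=0.0, scale=1.0, size=(100, 50, 10))

# Pick a subspace (tricluster) on each of the three dimensions
obs = rng.choice(100, size=10, replace=False)
feats = rng.choice(50, size=5, replace=False)
ctxs = rng.choice(10, size=3, replace=False)

# Plant a constant pattern, perturbed by light noise, inside the subspace
pattern_value = 5.0
noise = rng.normal(0.0, 0.1, size=(10, 5, 3))
data[np.ix_(obs, feats, ctxs)] = pattern_value + noise

# The generator's key advantage: the planted solution is known,
# so extrinsic evaluation of a triclustering algorithm is possible
ground_truth = {"observations": obs, "features": feats, "contexts": ctxs}
```

A real generator would additionally support symbolic data, overlapping triclusters, and injected missing values or errors, as the abstract notes; those knobs are omitted here for brevity.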


2004 ◽  
Vol 23 (1) ◽  
pp. IV-VIII ◽  
Author(s):  
Takeyoshi Tanaka ◽  
Shigeru Yamada

Author(s):  
Chengcheng Yu ◽  
Fan Xia ◽  
Qunyan Zhang ◽  
Haixin Ma ◽  
Weining Qian ◽  
...  

Author(s):  
Emanuel Ferreira ◽  
João Paulo C. Rodrigues ◽  
Leça Coelho

This article analyses the fire risk in a municipal solid waste treatment facility, namely at the level of its waste deposit pit. Fire development simulations were carried out using a two-zone model, the Consolidated Model of Fire and Smoke Transport (CFAST), and a field model, the Fire Dynamics Simulator and Smokeview (FDS-SMV), both from the National Institute of Standards and Technology (NIST), and the results are analysed and discussed.


Author(s):  
Shibnath Mukherjee ◽  
Aryya Gangopadhyay ◽  
Zhiyuan Chen

While data mining has been widely acclaimed as a technology that can bring potential benefits to organizations, such efforts may be negatively impacted by the possibility of discovering sensitive patterns, particularly in patient data. In this article the authors present an approach to identify the optimal set of transactions that, if sanitized, would hide sensitive patterns while minimizing both the accidental hiding of legitimate patterns and the damage done to the database. Their methodology allows the user to adjust the weights assigned to the benefit in terms of the number of restrictive patterns hidden, the cost in terms of the number of legitimate patterns hidden, and the damage to the database in terms of the difference between the marginal frequencies of items in the original and sanitized databases. Most approaches to this problem found in the literature are heuristic-based, without formal treatment of optimality. Although ILP has been used in a few works as a formal optimization approach, the novelty of this method is its extremely low-complexity cost model in contrast to the others. They implemented their methodology in C and C++ and ran several experiments with synthetic data generated with the IBM synthetic data generator. The experiments show excellent results when compared to those in the literature.
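The weighted objective the abstract describes, benefit from sensitive patterns hidden, cost from legitimate patterns lost, and damage from shifted item frequencies, can be sketched as a scoring function over a transaction database. This is a simplified illustration, not the authors' ILP formulation: the toy database, the weights, and the minimum-support threshold are all assumptions.

```python
from collections import Counter

def support(db, pattern):
    """Number of transactions containing every item of the pattern."""
    return sum(1 for t in db if pattern <= t)

def marginal_damage(original, sanitized):
    """Sum of absolute differences in per-item frequencies."""
    orig = Counter(i for t in original for i in t)
    san = Counter(i for t in sanitized for i in t)
    return sum(abs(orig[i] - san[i]) for i in set(orig) | set(san))

def score(original, sanitized, sensitive, legitimate,
          w_benefit=1.0, w_cost=0.5, w_damage=0.1, min_sup=2):
    """Weighted objective: reward hidden sensitive patterns, penalize
    lost legitimate patterns and damage to marginal frequencies."""
    hidden = sum(1 for p in sensitive if support(sanitized, p) < min_sup)
    lost = sum(1 for p in legitimate
               if support(original, p) >= min_sup > support(sanitized, p))
    return (w_benefit * hidden - w_cost * lost
            - w_damage * marginal_damage(original, sanitized))

# Toy database: each transaction is a set of items
db = [{"a", "b"}, {"a", "b", "c"}, {"b", "c"}, {"a", "c"}]
sensitive = [frozenset({"a", "b"})]
legitimate = [frozenset({"b", "c"})]

# Candidate sanitization: drop item "a" from transactions supporting {a, b}
sanitized = [t - {"a"} if {"a", "b"} <= t else t for t in db]
print(score(db, sanitized, sensitive, legitimate))
```

An ILP formulation, as in the article, would instead introduce a binary decision variable per candidate transaction and optimize this same trade-off exactly rather than evaluating one hand-picked sanitization.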

