Fire safety case study of a railway tunnel: Smoke evacuation

2007 ◽  
Vol 11 (2) ◽  
pp. 207-222 ◽  
Author(s):  
Karim Van Maele ◽
Bart Merci

When a fire occurs in a tunnel, it is of great importance to ensure the safety of the tunnel's occupants. This is achieved by creating smoke-free spaces in the tunnel through control of the smoke gases. In this paper, results are presented of a fire safety study for a real-scale railway tunnel test case. Numerical simulations are performed in order to examine the possibility of natural smoke ventilation in inclined tunnels. Several aspects are taken into account: the length of the simulated tunnel section, the slope of the tunnel and the possible effects of external wind at one portal of the tunnel. The Fire Dynamics Simulator of the National Institute of Standards and Technology, USA, is applied to perform the simulations. The simulations show, first, that the slope of the tunnel is of little importance for the local behavior of the smoke during the early stages of the fire. Second, the results show that external wind and/or pressure conditions have a large effect on the smoke gases inside the tunnel. Finally, an indication of the value of the critical ventilation velocity is given. The study also shows that computational fluid dynamics calculations are a valuable tool for large-scale, real-life complex fire cases.
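
For context, the critical ventilation velocity mentioned above is the longitudinal air speed just sufficient to prevent smoke backlayering. A minimal sketch of one common engineering estimate, Kennedy's correlation (as used in NFPA 502-style tunnel ventilation practice, not necessarily the model derived in this paper), is shown below; the geometry and heat-release numbers are assumed for illustration and are not taken from the study:

```python
# Hedged sketch: Kennedy's critical-velocity estimate, solved by fixed-point
# iteration. Illustrative only -- the paper derives its value from CFD, and
# all numbers below are assumed, not taken from the study.
g, rho, cp, T0 = 9.81, 1.2, 1.0, 293.0   # gravity, air density (kg/m^3), cp (kJ/(kg*K)), ambient (K)
K1, Kg = 0.606, 1.0                      # Froude-number factor; Kg > 1 on downhill grades
H, A = 6.0, 50.0                         # tunnel height (m) and cross-section (m^2), assumed
Q = 10_000.0                             # convective heat release rate (kW), assumed

Vc = 1.0                                 # initial guess (m/s)
for _ in range(50):
    Tf = Q / (rho * cp * A * Vc) + T0    # hot-layer temperature at the fire
    Vc = Kg * K1 * (g * H * Q / (rho * cp * A * Tf)) ** (1.0 / 3.0)
print(f"critical velocity ~ {Vc:.2f} m/s")  # ~1.8 m/s for these assumed inputs
```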

2018 ◽  
Vol 24 (56) ◽  
pp. 223-228
Author(s):  
Masahito KIKUCHI ◽  
Kiyoshi FUKUI ◽  
Ayako TANNO ◽  
Moyu SEIKE ◽  
Jun KITAHORI ◽  
...  

2021 ◽  
Vol 55 (1) ◽  
pp. 1-2
Author(s):  
Bhaskar Mitra

Neural networks with deep architectures have demonstrated significant performance improvements in computer vision, speech recognition, and natural language processing. The challenges in information retrieval (IR), however, are different from those in these other application areas. A common form of IR involves ranking documents---or short passages---in response to keyword-based queries. Effective IR systems must deal with the query-document vocabulary mismatch problem by modeling relationships between different query and document terms and how they indicate relevance. Models should also consider lexical matches when the query contains rare terms---such as a person's name or a product model number---not seen during training, and avoid retrieving semantically related but irrelevant results. In many real-life IR tasks, retrieval involves extremely large collections---such as the document index of a commercial Web search engine---containing billions of documents. Efficient IR methods should take advantage of specialized IR data structures, such as the inverted index, to retrieve quickly from large collections. Given an information need, the IR system also mediates how much exposure an information artifact receives by deciding whether it should be displayed, and where it should be positioned, among other results. Exposure-aware IR systems may optimize for additional objectives besides relevance, such as parity of exposure for retrieved items and content publishers. In this thesis, we present novel neural architectures and methods motivated by the specific needs and challenges of IR tasks. We ground our contributions with a detailed survey of the growing body of neural IR literature [Mitra and Craswell, 2018]. Our key contribution towards improving the effectiveness of deep ranking models is the development of the Duet principle [Mitra et al., 2017], which emphasizes the importance of incorporating evidence based on both patterns of exact term matches and similarities between learned latent representations of query and document. To efficiently retrieve from large collections, we develop a framework to incorporate query term independence [Mitra et al., 2019] into any deep model, enabling large-scale precomputation and the use of the inverted index for fast retrieval. In the context of stochastic ranking, we further develop optimization strategies for exposure-based objectives [Diaz et al., 2020]. Finally, this dissertation also summarizes our contributions towards benchmarking neural IR models in the presence of large training datasets [Craswell et al., 2019] and explores the application of neural methods to other IR tasks, such as query auto-completion.
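
To illustrate the query-term-independence idea: if a model's score decomposes into a sum of per-term contributions, every (term, document) score can be precomputed offline and served from an inverted index, so online ranking needs no per-document forward pass. A minimal sketch, with a toy corpus and raw term frequency standing in for the learned per-term score (both are assumptions for illustration, not the thesis model):

```python
# Minimal sketch of query-term-independent scoring over an inverted index.
from collections import defaultdict

# Offline: precompute a score for each (term, doc) pair. In a neural ranker
# this would come from the model; a toy term-frequency count stands in here.
docs = {1: "fire safety in tunnels", 2: "neural ranking of documents"}
index = defaultdict(list)                  # term -> [(doc_id, score), ...]
for doc_id, text in docs.items():
    for term in set(text.split()):
        index[term].append((doc_id, float(text.split().count(term))))

# Online: because score(q, d) = sum over query terms of score(t, d),
# ranking reduces to inverted-index lookups and additions.
def rank(query):
    totals = defaultdict(float)
    for term in query.split():
        for doc_id, s in index.get(term, []):
            totals[doc_id] += s
    return sorted(totals.items(), key=lambda kv: -kv[1])

print(rank("neural fire"))                 # both toy docs match one term each
```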


Author(s):  
Krzysztof Jurczuk ◽  
Marcin Czajkowski ◽  
Marek Kretowski

This paper concerns the evolutionary induction of decision trees (DT) for large-scale data. Such a global approach is one of the alternatives to top-down inducers. It searches for the tree structure and the node tests simultaneously, and thus in many situations yields improvements in the prediction quality and size of the resulting classifiers. However, this population-based, iterative approach can be too computationally demanding to apply directly to big data mining. The paper demonstrates that this barrier can be overcome by smart distributed/parallel processing. Moreover, we ask whether the global approach can truly compete with greedy systems on large-scale data. For this purpose, we propose a novel multi-GPU approach. It combines knowledge of global DT induction and evolutionary algorithm parallelization with efficient utilization of GPU memory and computing resources. The search for the tree structure and the tests is performed on a CPU, while the fitness calculations are delegated to GPUs. A data-parallel decomposition strategy and the CUDA framework are applied. Experimental validation is performed on both artificial and real-life datasets. In both cases, the obtained acceleration is very satisfactory. The solution is able to process even billions of instances in a few hours on a single workstation equipped with 4 GPUs. The impact of data characteristics (size and dimension) on the convergence and speedup of the evolutionary search is also shown. When the number of GPUs grows, nearly linear scalability is observed, which suggests that data size boundaries for evolutionary DT mining are fading.
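
A minimal sketch of this division of labor, with a stub tree and a thread pool standing in for the CUDA kernels; every name and simplification below is illustrative, not the authors' implementation:

```python
# Hedged sketch: evolutionary search on the CPU, data-parallel fitness over
# shards (one shard per device, evaluated here by threads instead of GPUs).
import random
from multiprocessing.pool import ThreadPool

N_DEVICES = 4

def make_tree():
    # Stub "decision tree": a single threshold test on one feature.
    return {"feature": random.randrange(2), "thresh": random.random()}

def predict(tree, x):
    return int(x[tree["feature"]] > tree["thresh"])

def shard_error(args):
    # Stands in for a per-GPU kernel: push the shard's instances down the
    # candidate tree and count misclassifications.
    tree, shard = args
    return sum(1 for x, y in shard if predict(tree, x) != y)

data = [((random.random(), random.random()), random.randrange(2))
        for _ in range(10_000)]
shards = [data[i::N_DEVICES] for i in range(N_DEVICES)]  # data-parallel split

with ThreadPool(N_DEVICES) as pool:
    population = [make_tree() for _ in range(20)]
    for gen in range(5):                     # selection/variation on the CPU
        scored = [(sum(pool.map(shard_error, [(t, s) for s in shards])), t)
                  for t in population]       # partial errors reduced on CPU
        scored.sort(key=lambda e: e[0])
        best = [t for _, t in scored[:10]]   # truncation selection
        population = best + [make_tree() for _ in range(10)]  # naive variation
    print("best error:", scored[0][0], "/", len(data))
```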


Author(s):  
Gianluca Bardaro ◽  
Alessio Antonini ◽  
Enrico Motta

Over the last two decades, several deployments of robots for in-house assistance of older adults have been trialled. However, these solutions are mostly prototypes and remain unused in real-life scenarios. In this work, we review the historical and current landscape of the field to understand why robots have yet to succeed as personal assistants in daily life. Our analysis focuses on two complementary aspects: the capabilities of the physical platform and the logic of the deployment. The former analysis shows regularities in hardware configurations and functionalities, leading to the definition of a set of six application-level capabilities (exploration, identification, remote control, communication, manipulation, and digital situatedness). The latter focuses on the impact of robots on the daily life of users and categorises the deployment of robots for healthcare interventions using three types of services: support, mitigation, and response. Our investigation reveals that the value of healthcare interventions is limited by a stagnation of functionalities and a disconnection between the robotic platform and the design of the intervention. To address this issue, we propose a novel co-design toolkit, which uses an ecological framework for robot interventions in the healthcare domain. Our approach connects robot capabilities with known geriatric factors to create a holistic view encompassing both the physical platform and the logic of the deployment. As a case-study-based validation, we discuss the use of the toolkit in the pre-design of the robotic platform for a pilot intervention, part of the large-scale pilot of the EU H2020 GATEKEEPER project.


Author(s):  
Christoph Schwörer ◽  
Erika Gobet ◽  
Jacqueline F. N. van Leeuwen ◽  
Sarah Bögli ◽  
Rachel Imboden ◽  
...  

Observing natural vegetation dynamics over the entire Holocene is difficult in Central Europe, due to pervasive and increasing human disturbance since the Neolithic. One strategy to minimize this limitation is to select a study site in an area that is marginal for agricultural activity. Here, we present a new sediment record from Lake Svityaz in northwestern Ukraine. We have reconstructed regional and local vegetation and fire dynamics since the Late Glacial using pollen, spores, macrofossils and charcoal. Boreal forest composed of Pinus sylvestris and Betula with continental Larix decidua and Pinus cembra established in the region around 13,450 cal bp, replacing an open, steppic landscape. The first temperate tree to expand was Ulmus at 11,800 cal bp, followed by Quercus, Fraxinus excelsior, Tilia and Corylus ca. 1,000 years later. Fire activity was highest during the Early Holocene, when summer solar insolation reached its maximum. Carpinus betulus and Fagus sylvatica established at ca. 6,000 cal bp, coinciding with the first indicators of agricultural activity in the region and a transient climatic shift to cooler and moister conditions. Human impact on the vegetation remained initially very low, only increasing during the Bronze Age, at ca. 3,400 cal bp. Large-scale forest openings and the establishment of the present-day cultural landscape occurred only during the past 500 years. The persistence of highly diverse mixed forest under absent or low anthropogenic disturbance until the Early Middle Ages corroborates the role of human impact in the impoverishment of temperate forests elsewhere in Central Europe. The preservation or reestablishment of such diverse forests may mitigate future climate change impacts, specifically by lowering fire risk under warmer and drier conditions.


2021 ◽  
Vol 5 (1) ◽  
pp. 14
Author(s):  
Christos Makris ◽  
Georgios Pispirigos

Nowadays, due to the extensive use of information networks in a broad range of fields, e.g., bio-informatics, sociology, digital marketing, computer science, etc., graph theory applications have attracted significant scientific interest. Due to its apparent abstraction, community detection has become one of the most thoroughly studied graph partitioning problems. However, the existing algorithms principally propose iterative solutions of high polynomial order that repeatedly require exhaustive analysis. These methods are resource-intensive, unscalable, and inapplicable to big data graphs, such as today's social networks. In this article, a novel, near-linear, and highly scalable community prediction methodology is introduced. Specifically, using a distributed, stacking-based model, which is built on plain network topology characteristics of bootstrap-sampled subgraphs, the underlying community hierarchy of any given social network is efficiently extracted, regardless of its size and density. The effectiveness of the proposed methodology has been thoroughly examined on numerous real-life social networks and proven superior to various similar approaches in terms of performance, stability, and accuracy.
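
A minimal sketch of the kind of pipeline the abstract outlines: bootstrap-sample subgraphs, derive plain topology features, and feed them to a stacked model that predicts community membership of edges. The specific features, labels, and scikit-learn estimators below are assumptions for illustration, not the authors' exact design:

```python
# Hedged sketch: bootstrap-sampled subgraphs + topology features + stacking.
import random
import networkx as nx
from sklearn.ensemble import RandomForestClassifier, StackingClassifier
from sklearn.linear_model import LogisticRegression

random.seed(0)
G = nx.karate_club_graph()                  # toy stand-in for a social network

def edge_features(g, u, v):
    # Plain topology characteristics: endpoint degrees, neighborhood overlap.
    nu, nv = set(g[u]), set(g[v])
    return [g.degree(u), g.degree(v), len(nu & nv) / max(len(nu | nv), 1)]

# Bootstrap-style sampling: feature/label pairs from random node subsets.
X, y = [], []
for _ in range(20):
    sub = G.subgraph(random.sample(list(G), 20))
    for u, v in sub.edges():
        X.append(edge_features(sub, u, v))
        # Toy label: intra-community edge (same club) vs. bridge edge.
        y.append(int(G.nodes[u]["club"] == G.nodes[v]["club"]))

model = StackingClassifier(
    estimators=[("rf", RandomForestClassifier(n_estimators=50))],
    final_estimator=LogisticRegression())
model.fit(X, y)
print("training accuracy:", model.score(X, y))
```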


2021 ◽  
Vol 256 ◽  
pp. 112338
Author(s):  
Jie Zhao ◽  
Ramona Pelich ◽  
Renaud Hostache ◽  
Patrick Matgen ◽  
Wolfgang Wagner ◽  
...  

2008 ◽  
Vol 2008 ◽  
pp. 1-9 ◽  
Author(s):  
Peter Quax ◽  
Jeroen Dierckx ◽  
Bart Cornelissen ◽  
Wim Lamotte

The explosive growth in the number of applications based on networked virtual environment technology, both games and virtual communities, shows that these types of applications have become commonplace in a short period of time. From a research point of view, however, the inherent weaknesses in their architectures are quickly exposed. The Architecture for Large-Scale Virtual Interactive Communities (ALVIC) was originally developed to serve as a generic framework for deploying networked virtual environment applications on the Internet. While it has been shown to scale effectively to the numbers originally put forward, our findings show that, on a real-life network such as the Internet, several of its drawbacks will not be overcome in the near future. We have therefore recently started the development of ALVIC-NG, which, while incorporating the findings from our previous research, improves on the original version in several ways, making it suitable for deployment on the Internet as it exists today.


2021 ◽  
Vol 7 (1) ◽  
Author(s):  
Thomas Grinda ◽  
Natacha Joyon ◽  
Amélie Lusque ◽  
Sarah Lefèvre ◽  
Laurent Arnould ◽  
...  

Expression of the hormone receptors (HR) for estrogens (ER) and progesterone (PR) and of HER2 remains the cornerstone for defining the therapeutic strategy for breast cancer patients. We aimed to compare phenotypic profiles between matched primary and metastatic breast cancer (MBC) in the ESME database, a national real-life multicenter cohort of MBC patients. Patients with results available on both the primary tumour and the metastatic disease within 6 months of MBC diagnosis and before any tumour progression were eligible for the main analysis. Among the 16,703 patients included in the database, 1677 (10.0%) had biopsy results available at MBC diagnosis and on the matched primary tumour. The change rate of either HR or HER2 was 27.0%. Global HR status changed (from positive, i.e. either ER or PR positive, to negative, i.e. both negative, or the reverse) in 14.2% of the cases (expression loss in 72.5% and gain in 27.5%). HER2 status changed in 7.8% (amplification loss in 45.2%). The discordance rate appeared similar across different biopsy sites. Metastasis to bone, HER2+ and HR+/HER2- subtypes, and previous adjuvant endocrine therapy, but not relapse interval, were associated with HR discordance in multivariable analysis. Loss of HR status was significantly associated with a risk of death (adjusted hazard ratio = 1.51, p = 0.002), while gain of HR and HER2 discordance were not. In conclusion, discordance of HR and HER2 expression between primary and metastatic breast cancer cannot be neglected. In addition, HR loss is associated with worse survival. Sampling metastatic sites is essential for treatment adjustment.
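
For illustration, the discordance definitions above (global HR status is positive when either ER or PR is positive; a change is any flip between matched samples) can be made concrete on toy data; the values below are invented, not drawn from the ESME cohort:

```python
# Hedged sketch of the discordance computation on invented matched pairs.
import pandas as pd

pairs = pd.DataFrame({                  # one row per matched patient (toy values)
    "er_prim": [1, 1, 0, 0], "pr_prim": [0, 1, 0, 1],
    "er_met":  [0, 1, 0, 0], "pr_met":  [0, 1, 0, 0],
})
hr_prim = (pairs.er_prim | pairs.pr_prim).astype(bool)   # ER or PR positive
hr_met = (pairs.er_met | pairs.pr_met).astype(bool)
changed = hr_prim != hr_met                              # flip in either direction
print(f"HR discordance rate: {changed.mean():.1%}")
print(f"of which losses: {(hr_prim & ~hr_met).sum()} / {changed.sum()}")
```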


2015 ◽  
Vol 2015 ◽  
pp. 1-12 ◽  
Author(s):  
Sai Kiranmayee Samudrala ◽  
Jaroslaw Zola ◽  
Srinivas Aluru ◽  
Baskar Ganapathysubramanian

Dimensionality reduction refers to a set of mathematical techniques used to reduce the complexity of the original high-dimensional data while preserving its selected properties. Improvements in simulation strategies and experimental data collection methods are resulting in a deluge of heterogeneous and high-dimensional data, which often makes dimensionality reduction the only viable way to gain qualitative and quantitative understanding of the data. However, existing dimensionality reduction software often does not scale to the datasets arising in real-life applications, which may consist of thousands of points with millions of dimensions. In this paper, we propose a parallel framework for dimensionality reduction of large-scale data. We identify the key components underlying spectral dimensionality reduction techniques and propose their efficient parallel implementation. We show that the resulting framework can be used to process datasets consisting of millions of points when executed on a 16,000-core cluster, which is beyond the reach of currently available methods. To further demonstrate the applicability of our framework, we perform dimensionality reduction of 75,000 images representing morphology evolution during the manufacturing of organic solar cells, in order to identify how processing parameters affect morphology evolution.
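
Spectral dimensionality reduction methods share a common computational core: pairwise distances, a (double-centered) kernel matrix, and an eigendecomposition. The serial sketch below shows that core for classical MDS on a toy dataset; the framework in the paper parallelizes each of these stages across thousands of cores, which this version deliberately omits:

```python
# Hedged sketch of the serial spectral-DR core (classical MDS), for
# illustration only; the paper's contribution is parallelizing these stages.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 50))            # 200 points, 50 dimensions (toy)

D2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)   # squared distances
J = np.eye(len(X)) - np.ones((len(X), len(X))) / len(X)
B = -0.5 * J @ D2 @ J                     # double centering (Gram matrix)

vals, vecs = np.linalg.eigh(B)            # dense eigensolve: the hotspot
order = np.argsort(vals)[::-1][:2]        # keep the two leading components
Y = vecs[:, order] * np.sqrt(vals[order]) # 2-D embedding of the 50-D data
print(Y.shape)                            # (200, 2)
```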

