Dynamic Disaster Coordination System with Web-Based HTML5 API

2015 ◽  
Vol 4 (2) ◽  
pp. 1-15
Author(s):  
Hamdi Çinal ◽  
Şeyma Taşkan ◽  
Fulya Baybaş

Estimating damage before and after an earthquake requires both data collection and analysis. Many studies have performed this kind of analysis; however, each covers only a particular period of time, and no good infrastructure exists that can perform dynamic risk analysis based on newly collected data and changing circumstances. To this end, this project aims to build an infrastructure that enables long-term, up-to-date analysis. An updateable and shareable risk-analysis infrastructure was established for the Istanbul Metropolitan Municipality Disaster Coordination Center's Disaster and Emergency Plan with ELER (Earthquake Loss Estimation Routine), using web-based GIS tools. The ELER software, which analyzes loss of life and damage distribution, enables disaster plans to be implemented according to pre-earthquake scenarios and post-earthquake damage-distribution and amplitude results. For this purpose, a web-based data entry interface for the desktop software ELER was prepared and the required updates of the data set were provided. A relational database linking Marmara Sea bathymetry, 3D terrain elevation data, geology and building data was created, and web services allow these data to be updated online. End products are brought into service for users and administrators with web-based mapping software. As a result of this project, loss of life, injuries and the number of damaged buildings after an earthquake in Istanbul can be quantified as quickly as possible. Five-level (full, heavy, medium, light, undamaged) structural damage analyses were performed, the number of people needing post-earthquake emergency shelter was identified, and the amount of economic loss was calculated. Thus, under the coordination of IBB units, a holistic view of post-earthquake intervention regions, the size of the damage, etc. was provided; rapid damage assessment after an earthquake can be made with the established system; and the earthquake vulnerability risk can now be queried in a quantitative environment.

2016 ◽  
Vol 24 (1) ◽  
pp. 93-115 ◽  
Author(s):  
Xiaoying Yu ◽  
Qi Liao

Purpose – Passwords are designed to protect individual privacy and security and are widely used in almost every area of our lives. The strength of passwords is therefore critical to the security of our systems. However, due to the explosion of user accounts and the increasing complexity of password rules, users struggle to find ways to make up sufficiently secure yet easy-to-remember passwords. This paper aims to investigate whether there are repetitive patterns when users choose passwords and how such behaviors should prompt us to rethink password security policy. Design/methodology/approach – The authors develop a model to formalize the password repetition problem and design efficient algorithms to analyze the repeat patterns. To help security practitioners analyze these patterns, the authors design and implement a lightweight, web-based visualization tool for interactive exploration of password data. Findings – Through case studies on a real-world leaked password data set, the authors demonstrate how the tool can be used to identify various interesting patterns, e.g. shorter substrings of the same type used to make up longer strings, which are then repeated to make up the final passwords, suggesting that the length requirement of a password policy does not necessarily increase security. Originality/value – The contributions of this study are two-fold. First, the authors formalize the problem of password repetitive patterns by considering both short and long substrings and both directions, which have not been considered in the past. Efficient algorithms are developed and implemented that can analyze various repeat patterns quickly even in large data sets. Second, the authors design and implement four novel visualization views that are particularly useful for exploring password repeat patterns: the character frequency charts view, the short repeat heatmap view, the long repeat parallel coordinates view and the repeat word cloud view.
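The core repeat-pattern search described above can be sketched in a few lines of Python (a minimal illustration only; the paper's own algorithms are more efficient and also consider reversed substrings):

```python
from collections import Counter

def repeat_patterns(password, min_len=2):
    """Count substrings of length >= min_len that occur more than once
    in the password (forward repeats; lengths scanned up to len/2)."""
    counts = Counter()
    n = len(password)
    for length in range(min_len, n // 2 + 1):
        for start in range(n - length + 1):
            counts[password[start:start + length]] += 1
    # keep only substrings that actually repeat
    return {s: c for s, c in counts.items() if c > 1}

# Example: a long password built from a short repeated unit
print(repeat_patterns("abc123abc123"))
```

For a password like "abc123abc123", the search reveals that the entire string is just a short unit repeated, illustrating why length requirements alone do not guarantee strength.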


2021 ◽  
Vol ahead-of-print (ahead-of-print) ◽  
Author(s):  
Lukman E. Mansuri ◽  
D.A. Patel

Purpose – Heritage is a latent part of a sustainable built environment, and conservation and preservation of heritage is one of the United Nations' (UN) sustainable development goals. Many social and natural factors seriously threaten heritage structures by deteriorating and damaging the original fabric. Regular visual inspection of heritage structures is therefore necessary for their conservation and preservation. Conventional practice relies on manual inspection, which takes considerable time and human resources. The inspection system needs an innovative approach that is cheaper, faster, safer and less prone to human error than manual inspection. This study therefore aims to develop an automatic visual inspection system for the built heritage. Design/methodology/approach – An artificial intelligence-based automatic defect detection system is developed using the faster R-CNN (faster region-based convolutional neural network) object detection model. Images of heritage structures in the English and Dutch cemeteries of Surat (India) were captured by digital camera to prepare the image data set, which was used for training, validation and testing of the automatic defect detection model. During validation, the model's optimum detection accuracy was recorded as 91.58% for three types of defects: “spalling,” “exposed bricks” and “cracks.” Findings – This study develops a model for automatic web-based visual inspection of heritage structures using the faster R-CNN and demonstrates detection of spalling, exposed bricks and cracks in heritage structures. Comparison of the conventional (manual) and the developed automatic inspection systems reveals that the automatic system requires less time and staff; routine inspection can therefore be faster, cheaper, safer and more accurate than the conventional method. Practical implications – The study can improve the inspection of built heritage by reducing inspection time and cost, eliminating chances of human error and accident, and providing accurate and consistent information, thereby helping to ensure the sustainability of the built heritage. Originality/value – To ensure the sustainability of built heritage, this study presents an artificial intelligence-based methodology for developing an automatic visual inspection system. An automatic web-based visual inspection system for the built heritage has not been reported in previous studies.
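Detection models such as faster R-CNN output bounding boxes that are scored against ground-truth annotations using intersection-over-union (IoU). A minimal sketch of that standard metric (illustrative only, not the study's code):

```python
def iou(box_a, box_b):
    """Intersection-over-union of two axis-aligned boxes (x1, y1, x2, y2)."""
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)

# A predicted defect box half-overlapping a ground-truth box
print(iou((0, 0, 10, 10), (5, 0, 15, 10)))  # overlap 50, union 150
```

A prediction is typically counted as a correct detection when its IoU with a ground-truth box exceeds a threshold such as 0.5.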


2021 ◽  
pp. 1-11
Author(s):  
Zach Pennington ◽  
Jeff Ehresman ◽  
Andrew Schilling ◽  
James Feghali ◽  
Andrew M. Hersh ◽  
...  

OBJECTIVE Patients with spine tumors are at increased risk for both hemorrhage and venous thromboembolism (VTE). Tranexamic acid (TXA) has been advanced as a potential intervention to reduce intraoperative blood loss in this surgical population, but many fear it is associated with increased VTE risk due to the hypercoagulability noted in malignancy. In this study, the authors aimed to 1) develop a clinical calculator for postoperative VTE risk in the population with spine tumors, and 2) investigate the association of intraoperative TXA use and postoperative VTE. METHODS A retrospective data set from a comprehensive cancer center was reviewed for adult patients treated for vertebral column tumors. Data were collected on surgery performed, patient demographics and medical comorbidities, VTE prophylaxis measures, and TXA use. TXA use was classified as high-dose (≥ 20 mg/kg) or low-dose (< 20 mg/kg). The primary study outcome was VTE occurrence prior to discharge. Secondary outcomes were deep venous thrombosis (DVT) or pulmonary embolism (PE). Multivariable logistic regression was used to identify independent risk factors for VTE and the resultant model was deployed as a web-based calculator. RESULTS Three hundred fifty patients were included. The mean patient age was 57 years, 53% of patients were male, and 67% of surgeries were performed for spinal metastases. TXA use was not associated with increased VTE (14.3% vs 10.1%, p = 0.37). After multivariable analysis, VTE was independently predicted by lower serum albumin (odds ratio [OR] 0.42 per g/dl, 95% confidence interval [CI] 0.23–0.79, p = 0.007), larger mean corpuscular volume (OR 0.91 per fl, 95% CI 0.84–0.99, p = 0.035), and history of prior VTE (OR 2.60, 95% CI 1.53–4.40, p < 0.001). Longer surgery duration approached significance and was included in the final model. 
Although TXA was not independently associated with the primary outcome of VTE, high-dose TXA use was associated with increased odds of both DVT and PE. The VTE model showed a fair fit of the data with an area under the curve of 0.77. CONCLUSIONS In the present cohort of patients treated for vertebral column tumors, TXA was not associated with increased VTE risk, although high-dose TXA (≥ 20 mg/kg) was associated with increased odds of DVT or PE. Additionally, the web-based clinical calculator of VTE risk presented here may prove useful in counseling patients preoperatively about their individualized VTE risk.
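A web calculator of this kind combines the model's log-odds terms into a probability. The sketch below uses the odds ratios reported in the abstract, but the intercept and the reference values for albumin and mean corpuscular volume are hypothetical placeholders, since the abstract does not report them:

```python
import math

# Odds ratios reported in the abstract (per unit of each predictor)
OR_ALBUMIN = 0.42   # per g/dl of serum albumin
OR_MCV = 0.91       # per fl of mean corpuscular volume
OR_PRIOR_VTE = 2.60  # history of prior VTE

INTERCEPT = -2.0     # hypothetical; not reported in the abstract

def vte_probability(albumin_gdl, mcv_fl, prior_vte,
                    albumin_ref=4.0, mcv_ref=90.0):
    """Logistic-model probability: log-odds are the intercept plus
    log(OR) times each predictor's deviation from a reference value.
    The reference values here are illustrative, not the paper's."""
    log_odds = (INTERCEPT
                + math.log(OR_ALBUMIN) * (albumin_gdl - albumin_ref)
                + math.log(OR_MCV) * (mcv_fl - mcv_ref)
                + math.log(OR_PRIOR_VTE) * (1 if prior_vte else 0))
    return 1 / (1 + math.exp(-log_odds))

# Lower albumin and prior VTE raise the predicted risk
print(vte_probability(2.5, 90.0, prior_vte=True))
print(vte_probability(4.0, 90.0, prior_vte=False))
```

Because OR_ALBUMIN is below 1, each g/dl drop in albumin multiplies the odds by 1/0.42, matching the abstract's finding that lower serum albumin independently predicts VTE.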


2020 ◽  
Vol 2020 ◽  
pp. 1-17
Author(s):  
Chuang Chen ◽  
Yinhui Wang ◽  
Tao Wang ◽  
Xiaoyan Yang

Data-driven damage identification based on measurements from a structural health monitoring (SHM) system is an active research topic. In this study, a structural damage identification method based on the Mahalanobis distance cumulant (MDC) was proposed, built on the intrinsic mode functions (IMFs) decomposed by the empirical mode decomposition (EMD) method and the fitting residual of the trend term of the measured data. The damage feature vector is composed of the squared MDC values and is calculated over a segmented data set. This lets the damage-induced changes at monitoring points accumulate as an "amplification effect," so that more damage information is obtained. The calculation method for the damage feature vector and the damage identification procedure are given. A mass-spring system with four mass points and four springs was used to simulate damage cases; the results showed that the MDC damage feature vector can effectively identify the occurrence and location of damage. Dynamic measurements from a prestressed concrete continuous box-girder bridge were decomposed into IMFs and a trend term by the EMD method, and the recursive autoregressive-moving-average with exogenous inputs (RARMX) algorithm was used to fit the trend term and obtain the fitting residual. Using the first n-order IMFs and the fitting residual as the clusters for damage identification, the effectiveness of the method is also demonstrated.
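The building block of the MDC feature is the squared Mahalanobis distance, which measures how far an observation lies from a reference distribution while accounting for correlation between channels. A minimal sketch for a two-dimensional feature vector (illustrative only; the paper accumulates these squared distances over segmented data):

```python
def mahalanobis_sq(x, mean, cov):
    """Squared Mahalanobis distance for a 2-D feature vector,
    inverting the 2x2 covariance matrix analytically."""
    (a, b), (c, d) = cov
    det = a * d - b * c
    inv = ((d / det, -b / det), (-c / det, a / det))
    dx = (x[0] - mean[0], x[1] - mean[1])
    # dx^T * inv(cov) * dx
    return (dx[0] * (inv[0][0] * dx[0] + inv[0][1] * dx[1])
            + dx[1] * (inv[1][0] * dx[0] + inv[1][1] * dx[1]))

# With an identity covariance, this reduces to squared Euclidean distance
print(mahalanobis_sq((3.0, 4.0), (0.0, 0.0), ((1.0, 0.0), (0.0, 1.0))))  # 25.0
```

Damage shifts the monitoring data away from the healthy-state distribution, so the distance (and hence its cumulant over segments) grows, which is the "amplification effect" described above.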


Author(s):  
Nasibah Husna Mohd Kadir ◽  
Sharifah Aliman

In social media, product reviews contain text, emoticons, numbers and symbols, which makes text summarization hard. Text analytics is one of the key techniques for exploring unstructured data. The purpose of this study is to handle such unstructured data by sorting and summarizing review data through a web-based text analytics approach using R. According to a comparative table of Natural Language Processing (NLP) features across studies, the web-based text analytics approach using R can analyze unstructured data with R's data processing packages. It combines all the NLP features in the menu part of the text analytics process, step by step, with labels that make it easier for users to view the full text summarization. This study uses health product reviews from Shaklee as the data set. The proposed approach shows acceptable performance in terms of system feature execution compared with the baseline system.
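Although the study works in R, the basic cleaning-and-summarizing step it describes can be illustrated with a short sketch (the reviews and stopword list here are hypothetical examples, not the Shaklee data):

```python
import re
from collections import Counter

def top_terms(reviews, k=3,
              stopwords=frozenset({"the", "and", "is", "it", "a"})):
    """Strip non-letters (emoticons, numbers, symbols), lowercase,
    drop stopwords, and return the k most frequent terms."""
    words = []
    for text in reviews:
        words += [w for w in re.findall(r"[a-z]+", text.lower())
                  if w not in stopwords]
    return Counter(words).most_common(k)

reviews = [
    "Great product!!! :) 10/10 would buy again",
    "The product is great and works well",
]
print(top_terms(reviews, k=2))
```

The regular expression keeps only alphabetic runs, so emoticons and numbers drop out before the frequency summary, which is the essence of the cleaning step the abstract describes.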


2021 ◽  
Author(s):  
Jiyao Wang ◽  
Philippe Youkharibache ◽  
Aron Marchler-Bauer ◽  
Christopher Lanczycki ◽  
Dachuan Zhang ◽  
...  

Abstract iCn3D was originally released as a web-based 3D viewer, which allows users to create a custom view in a lifelong, shortened URL to share with colleagues. Recently, iCn3D was converted to use JavaScript classes and can now be used as a library to write Node.js scripts. Any interactive feature in iCn3D can be converted to a Node.js script to run in batch mode on a large data set. Currently the following Node.js script examples are available at https://github.com/ncbi/icn3d/tree/master/icn3dnode: ligand-protein interaction, protein-protein interaction, change of interactions due to residue mutations, DelPhi electrostatic potential, and solvent accessible surface area. iCn3D PNG images can also be exported in batch mode using a Python script. Other recent features of iCn3D include the alignment of multiple chains from different structures, realignment, dynamic symmetry calculation for any subsets, 2D cartoons at different levels, and interactive contact maps. iCn3D can also be used in Jupyter Notebook as described at https://pypi.org/project/icn3dpy.


2021 ◽  
Vol 108 (Supplement_7) ◽  
Author(s):  
Nandu Nair ◽  
Vasileios Kalatzis ◽  
Madhavi Gudipati ◽  
Anne Gaunt ◽  
Vishnu Machineni

Abstract Aims During the period December 2018 to November 2019, a total of 84 cases were entered on the NELA website, against HES data suggesting 392 laparotomies. This suggests a possible case ascertainment of 21%, prompting us to look at our data acquisition in detail. Methods The NELA data from January–March 2020 were interrogated from the NELA website and hospital records. Results Analysis revealed that during this period 45 patients had a laparotomy recorded, whereas the hospital database recorded 68 laparotomies. Of the 45 cases entered on the NELA database, only 1 patient had a complete data set entered. 22 cases had 87% data entry and 22 cases had <50% of the data fields completed. Firstly, we were not capturing all patients who underwent an emergency laparotomy, and secondly our data entry for the patients we did report was incomplete. This led us to engage in a quality improvement project with the following measures. Conclusions We re-assessed case ascertainment and completeness of data collection in the period April 2020 – June 2020; the case ascertainment rate increased to 54% and all entries were complete and locked.


Author(s):  
Carlo Giacomo Prato ◽  
Shlomo Bekhor ◽  
Cristina Pronello

In the context of route choice, modeling the process that generates the set of available alternatives in the mind of the individual is a complex and not fully explored issue. Route choice behavior is influenced by variables that are observable, such as travel time and cost, and unobservable, such as attitudes, perceptions, spatial abilities, and network knowledge. In this study, attitudinal data were collected with a web-based survey addressed to individuals who habitually drive from home to work. The paper proposes a methodology to conduct a proper application of factor analysis to the route choice context and describes the preparation of an appropriate data set through measures of internal consistency and sampling adequacy. The paper shows that, for the data set obtained from the web-based survey, six latent constructs affecting driver behavior were extracted and scores of each driver on each factor were calculated.
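The internal-consistency measures mentioned above are commonly computed as Cronbach's alpha across the items of an attitudinal scale. A minimal sketch (the Likert-scale responses are hypothetical; the paper does not specify its exact computation):

```python
def cronbach_alpha(items):
    """Cronbach's alpha for internal consistency.
    `items` is a list of item-score lists, one list per survey item,
    each holding one score per respondent."""
    k = len(items)
    n = len(items[0])

    def variance(xs):
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

    item_var = sum(variance(item) for item in items)
    totals = [sum(item[i] for item in items) for i in range(n)]
    total_var = variance(totals)
    return k / (k - 1) * (1 - item_var / total_var)

# Three Likert-scale items answered by five hypothetical respondents
items = [
    [4, 5, 3, 4, 2],
    [4, 4, 3, 5, 2],
    [5, 5, 2, 4, 3],
]
print(cronbach_alpha(items))
```

Values near 1 indicate that the items measure a common latent construct; scales with low alpha are typically dropped before factor extraction.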

