Automatic Extraction of Indoor Spatial Information from Floor Plan Image: A Patch-Based Deep Learning Methodology Application on Large-Scale Complex Buildings

2021 ◽  
Vol 10 (12) ◽  
pp. 828
Author(s):  
Hyunjung Kim ◽  
Seongyong Kim ◽  
Kiyun Yu

Automatic floor plan analysis has gained increased attention in recent research. However, most studies in this area experiment with simplified floor plan datasets of low resolution and small housing scale, chosen for their suitability to data-driven models. For practical use, such as reconstructing multi-use buildings for indoor navigation, it is necessary to focus more on large-scale complex buildings. This study aimed to build a framework using CNNs (Convolutional Neural Networks) for analyzing floor plans of complex buildings at various scales. By dividing a floor plan into a set of normalized patches, the framework enables the proposed CNN model to process variously scaled or high-resolution inputs, which is a barrier for existing methods. The model detected building objects per patch and assembled them into one result by applying the corresponding translation matrices. Finally, the detected building objects were vectorized, considering their compatibility with 3D modeling. As a result, our framework exhibited a detection rate (87.77%) and recognition accuracy (85.53%) similar to those of existing studies, despite the complexity of the data used. Our study thus expands the practical scope of automatic floor plan analysis.
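The patch-assembly step described above can be sketched as follows: detections made in local patch coordinates are translated back into the whole-plan frame and merged. All names, the box representation, and the simple offset form of the translation are illustrative assumptions, not the authors' exact implementation.

```python
def to_global(box, patch_origin):
    """Translate an (x_min, y_min, x_max, y_max) box from patch
    coordinates to whole-floor-plan coordinates."""
    ox, oy = patch_origin
    x0, y0, x1, y1 = box
    return (x0 + ox, y0 + oy, x1 + ox, y1 + oy)

def assemble(patch_results):
    """patch_results: list of (patch_origin, [boxes]) pairs,
    one entry per normalized patch. Returns all boxes in the
    global coordinate frame."""
    merged = []
    for origin, boxes in patch_results:
        merged.extend(to_global(b, origin) for b in boxes)
    return merged
```

In homogeneous coordinates the same shift is a multiplication by a 3x3 translation matrix, which is presumably what the abstract refers to.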

2021 ◽  
Vol 11 (11) ◽  
pp. 4727
Author(s):  
Hyunjung Kim

This study proposes a technology that allows automatic extraction of vectorized indoor spatial information from raster images of floor plans. Automatic reconstruction of indoor spaces from floor plans is based on a deep learning algorithm, which trains on scanned floor plan images and extracts critical indoor elements such as room structures, junctions, walls, and openings. The newly developed technology proposed herein can handle complicated floor plans which previous studies could not extract automatically because of their complexity and the difficulty of training deep learning models on them. Such indoor spatial information can be digitized and vectorized from a floor plan image either through manual drawing or with the newly developed deep learning-based automatic extraction. This study proposes an evaluation framework for assessing this newly developed technology against manual digitization. Using the analytic hierarchy process, the hierarchical aspects of technology value and their relative importance are systematically quantified. The analysis suggested that the automatic technology using a deep learning algorithm predominated across the criteria of substitutability, completeness, and supply and demand. In this study, the technology value of automatic floor plan analysis is systematically compared with that of traditional manual editing and assessed qualitatively, which had not been done in existing studies. Consequently, this study determines the effectiveness and usefulness of automatic floor plan analysis as a reasonable technology for acquiring indoor spatial information.
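The analytic hierarchy process step can be illustrated with the standard geometric-mean approximation of the principal eigenvector of a pairwise comparison matrix. The example matrix values below are hypothetical, not the study's actual judgments.

```python
from math import prod

def ahp_weights(pairwise):
    """pairwise[i][j]: how many times criterion i outweighs
    criterion j (a reciprocal comparison matrix). Returns the
    normalized priority weight of each criterion."""
    n = len(pairwise)
    gmeans = [prod(row) ** (1.0 / n) for row in pairwise]  # row geometric means
    total = sum(gmeans)
    return [g / total for g in gmeans]
```

For a two-criterion matrix where the first criterion is judged three times as important, this yields weights of 0.75 and 0.25.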


Author(s):  
Weiyan Chen ◽  
Fusang Zhang ◽  
Tao Gu ◽  
Kexing Zhou ◽  
Zixuan Huo ◽  
...  

Floor plan construction has been one of the key techniques in many important applications such as indoor navigation, location-based services, and emergency rescue. Existing floor plan construction methods require expensive dedicated hardware (e.g., Lidar or a depth camera) and may not work in low-visibility environments (e.g., smoke, fog, or dust). In this paper, we develop a low-cost Ultra-Wideband (UWB)-based system (named UWBMap) that is mounted on a mobile robot platform to construct floor plans through smoke. UWBMap leverages low-cost, off-the-shelf UWB radar and is able to construct an indoor map with an accuracy comparable to Lidar (i.e., the state of the art). The underpinning technique is to take advantage of the radar's mobility to form virtual antennas and gather spatial information about a target. UWBMap also eliminates both robot motion noise and environmental noise to enhance weak reflections from small objects for a robust construction process. In addition, we overcome the limited view of a single radar by combining the views of multiple radars. Extensive experiments in different indoor environments show that UWBMap achieves map construction with a median error of 11 cm and a 90th-percentile error of 26 cm, and it operates effectively in indoor scenarios with glass walls and dense smoke.
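The virtual-antenna idea is classically realized by delay-and-sum back-projection: each measurement position of the moving radar contributes the echo amplitude at the range bin matching each image pixel. The sketch below is a simplified one-way-range toy version under assumed bin size and data, not UWBMap's actual pipeline.

```python
from math import hypot

def backproject(positions, profiles, grid, bin_m=0.05):
    """positions: radar (x, y) per measurement; profiles: echo
    amplitude per range bin per measurement; grid: (x, y) pixels.
    Returns one coherently summed intensity per pixel."""
    image = []
    for px, py in grid:
        acc = 0.0
        for (ax, ay), prof in zip(positions, profiles):
            r = hypot(px - ax, py - ay)   # range from this virtual antenna
            idx = int(r / bin_m)          # matching range bin
            if idx < len(prof):
                acc += prof[idx]          # echoes add up at true targets
        image.append(acc)
    return image
```

Pixels at a real reflector accumulate energy from every antenna position, while other pixels receive incoherent contributions, which is what makes the moving radar behave like a large array.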


Author(s):  
Jian Gong ◽  
Xinyu Zhang ◽  
Yuanjun Huang ◽  
Ju Ren ◽  
Yaoxue Zhang

IMU-based inertial tracking plays an indispensable role in many mobility-centric tasks, such as robotic control, indoor navigation, and virtual reality gaming. Despite its mature application in rigid machine mobility (e.g., robots and aircraft), tracking human users via mobile devices remains a fundamental challenge due to intractable gait/posture patterns. Recent data-driven models have tackled sensor drifting, one key issue that plagues inertial tracking. However, these systems still assume the devices are held or attached to the user's body with a relatively fixed posture. In practice, natural body activities may rotate or translate the device, which may be mistaken for whole-body movement. Such motion artifacts remain the dominating factor that defeats existing inertial tracking systems in practical, uncontrolled settings. Inspired by the observation that the human head moves far less relative to the body during walking than other body parts, we propose a novel multi-stage sensor fusion pipeline called DeepIT, which realizes inertial tracking by synthesizing the IMU measurements from a smartphone and an associated earbud. DeepIT introduces a data-driven reliability-aware attention model, which assesses the reliability of each IMU and opportunistically synthesizes their data to mitigate the impact of motion noise. Furthermore, DeepIT uses a reliability-aware magnetometer compensation scheme to combat the angular drifting problem caused by unrestricted motion artifacts. We validate DeepIT on the first large-scale inertial navigation dataset involving both smartphone and earbud IMUs. The evaluation results show that DeepIT achieves multiple folds of accuracy improvement in challenging uncontrolled natural walking scenarios, compared with state-of-the-art closed-form and data-driven models.
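The reliability-aware fusion can be pictured as a softmax weighting of per-IMU displacement estimates by their reliability scores. In DeepIT the scores come from a learned attention model; here they are given as inputs, and all names and values are illustrative.

```python
from math import exp

def fuse(estimates, reliabilities):
    """estimates: per-IMU (dx, dy) displacement; reliabilities:
    per-IMU scalar score. Returns the softmax-weighted displacement,
    so noisier IMUs contribute less."""
    mx = max(reliabilities)
    ws = [exp(r - mx) for r in reliabilities]  # numerically stable softmax
    total = sum(ws)
    ws = [w / total for w in ws]
    dx = sum(w * e[0] for w, e in zip(ws, estimates))
    dy = sum(w * e[1] for w, e in zip(ws, estimates))
    return dx, dy
```

With equal reliabilities this reduces to a plain average; as one IMU's score grows, the fused estimate smoothly shifts toward it.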


2021 ◽  
Vol 10 (3) ◽  
pp. 146
Author(s):  
Xin Fu ◽  
Hengcai Zhang ◽  
Peixiao Wang

The lack of indoor navigation graphs has become a bottleneck in indoor applications and services. This paper presents a novel automated approach for reconstructing indoor navigation graphs from large-scale, low-frequency indoor trajectories without any other data sources. The proposed approach includes three steps: trajectory simplification, 2D floor plan extraction, and 3D navigation graph construction. First, we propose an ST-Join-Clustering algorithm to identify and simplify redundant stay points embedded in the indoor trajectories. Second, an indoor trajectory bitmap construction based on a self-adaptive Gaussian filter is developed, and we then propose a new improved thinning algorithm to extract 2D indoor floor plans. Finally, we present an improved CFSFDP algorithm with time constraints to identify the 3D topological connection points between two different floors. To illustrate the applicability of the proposed approach, we conducted a real-world case study using a dataset of over 4000 indoor trajectories and 5 million location points. The case study results showed that the proposed approach improves navigation network accuracy by 1.83% and topological accuracy by 13.7% compared to the classical kernel density estimation approach.
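The stay-point simplification idea in the first step can be sketched as collapsing consecutive points that linger within a small radius for longer than a dwell threshold into one representative point. The thresholds and the exact clustering details of ST-Join-Clustering are assumptions for illustration.

```python
from math import hypot

def simplify(points, radius=2.0, min_dwell=60):
    """points: list of (x, y, t) sorted by t. Runs of points within
    `radius` of the run's start lasting at least `min_dwell` seconds
    collapse into their centroid; moving segments are kept as-is."""
    out, i, n = [], 0, len(points)
    while i < n:
        j = i
        while j + 1 < n and hypot(points[j + 1][0] - points[i][0],
                                  points[j + 1][1] - points[i][1]) <= radius:
            j += 1
        if points[j][2] - points[i][2] >= min_dwell:  # a stay: keep centroid
            xs = [p[0] for p in points[i:j + 1]]
            ys = [p[1] for p in points[i:j + 1]]
            out.append((sum(xs) / len(xs), sum(ys) / len(ys), points[i][2]))
        else:
            out.extend(points[i:j + 1])               # moving: keep as-is
        i = j + 1
    return out
```

Removing redundant stay points like this keeps only the geometry that matters before rasterizing the trajectories into the bitmap.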


2021 ◽  
Vol 9 (5) ◽  
pp. 1012
Author(s):  
Magdalena Zając ◽  
Magdalena Skarżyńska ◽  
Anna Lalak ◽  
Renata Kwit ◽  
Aleksandra Śmiałowska-Węglińska ◽  
...  

Reptiles are considered a reservoir of a variety of Salmonella (S.) serovars. Nevertheless, due to a lack of large-scale research, the importance of Reptilia as a Salmonella vector remains incompletely recognized. A total of 731 samples collected from reptiles and their environment were tested. The aim of the study was to assess the prevalence of Salmonella in exotic reptiles kept in Poland and to confirm Salmonella contamination of the environment after reptile exhibitions. The study included Salmonella isolation and identification, followed by epidemiological analysis of the antimicrobial resistance of the isolates. Implementing a pathway additional to the standard Salmonella isolation protocol led to a 21% increase in the Salmonella serovar detection rate. The study showed a high occurrence of Salmonella, the highest being 92.2% in snakes, followed by lizards (83.7%) and turtles (60.0%). The pathogen was also found in 81.2% of swabs taken from table and floor surfaces after reptile exhibitions and in two out of three egg samples. A total of 918 Salmonella strains belonging to 207 serovars and serological variants were obtained. We noted serovars considered important with respect to public health, i.e., S. Enteritidis, S. Typhimurium, and S. Kentucky. The study proves that exotic reptiles in Poland are a relevant reservoir of Salmonella.


Algorithms ◽  
2021 ◽  
Vol 14 (5) ◽  
pp. 154
Author(s):  
Marcus Walldén ◽  
Masao Okita ◽  
Fumihiko Ino ◽  
Dimitris Drikakis ◽  
Ioannis Kokkinakis

Increasing processing capabilities and input/output constraints of supercomputers have increased the use of co-processing approaches, i.e., visualizing and analyzing data sets of simulations on the fly. We present a method that evaluates the importance of different regions of simulation data and a data-driven approach that uses the proposed method to accelerate in-transit co-processing of large-scale simulations. We use the importance metrics to simultaneously employ multiple compression methods on different data regions to accelerate the in-transit co-processing. Our approach strives to adaptively compress data on the fly and uses load balancing to counteract memory imbalances. We demonstrate the method’s efficiency through a fluid mechanics application, a Richtmyer–Meshkov instability simulation, showing how to accelerate the in-transit co-processing of simulations. The results show that the proposed method can expeditiously identify regions of interest, even when using multiple metrics. Our approach achieved a speedup of 1.29× in a lossless scenario. The data decompression time was sped up by 2× compared to using a single compression method uniformly.
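The region-wise scheme can be pictured as combining several importance metrics per region and then mapping each region's score to a compressor. The metric combination rule, the threshold, and the method labels are illustrative assumptions; the paper additionally load-balances the resulting work.

```python
def combine(metrics):
    """metrics: list of per-region score lists, each already in [0, 1].
    A region's importance is its maximum across all metrics, so a
    region deemed interesting by any metric is preserved carefully."""
    return [max(vals) for vals in zip(*metrics)]

def assign_methods(importances, threshold=0.5):
    """Map each region's importance to a compressor: important regions
    get lossless compression, the rest a cheaper lossy method."""
    return ["lossless" if s >= threshold else "lossy" for s in importances]
```

Applying different compressors per region is what lets the pipeline shrink uninteresting data aggressively while keeping regions of interest intact.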


Author(s):  
Ekaterina Kochmar ◽  
Dung Do Vu ◽  
Robert Belfer ◽  
Varun Gupta ◽  
Iulian Vlad Serban ◽  
...  

Intelligent tutoring systems (ITS) have been shown to be highly effective at promoting learning as compared to other computer-based instructional approaches. However, many ITS rely heavily on expert design and hand-crafted rules. This makes them difficult to build and transfer across domains and limits their potential efficacy. In this paper, we investigate how feedback in a large-scale ITS can be automatically generated in a data-driven way, and more specifically how personalization of feedback can lead to improvements in student performance outcomes. First, we propose a machine learning approach to generate personalized feedback in an automated way, which takes the individual needs of students into account while alleviating the need for expert intervention and the design of hand-crafted rules. We leverage state-of-the-art machine learning and natural language processing techniques to provide students with personalized feedback using hints and Wikipedia-based explanations. Second, we demonstrate that personalized feedback leads to improved success rates at solving exercises in practice: our personalized feedback model is used in , a large-scale dialogue-based ITS with around 20,000 students launched in 2019. We present the results of experiments with students and show that the automated, data-driven, personalized feedback leads to a significant overall improvement of 22.95% in student performance outcomes and substantial improvements in the subjective evaluation of the feedback.


2021 ◽  
Vol 10 (1) ◽  
pp. e001087
Author(s):  
Tarek F Radwan ◽  
Yvette Agyako ◽  
Alireza Ettefaghian ◽  
Tahira Kamran ◽  
Omar Din ◽  
...  

A quality improvement (QI) scheme was launched in 2017, covering a large group of 25 general practices serving a deprived registered population. The aim was to improve the measurable quality of care in a population where type 2 diabetes (T2D) care had previously proved challenging. A complex set of QI interventions was co-designed by a team of primary care clinicians, educationalists, and managers. These interventions included organisation-wide goal setting, using a data-driven approach, ensuring staff engagement, implementing an educational programme for pharmacists, facilitating web-based QI learning at scale, and using methods that ensured sustainability. This programme was used to optimise the management of T2D through improving the eight care processes and three treatment targets which form part of the annual national diabetes audit for patients with T2D. With the implemented improvement interventions, there was significant improvement in all care processes and all treatment targets for patients with diabetes. Achievement of all eight care processes improved by 46.0% (p<0.001), while achievement of all three treatment targets improved by 13.5% (p<0.001). The QI programme provides an example of a data-driven, large-scale, multicomponent intervention delivered in primary care in ethnically diverse and socially deprived areas.


Author(s):  
Juan Luis Pérez-Ruiz ◽  
Igor Loboda ◽  
Iván González-Castillo ◽  
Víctor Manuel Pineda-Molina ◽  
Karen Anaid Rendón-Cortés ◽  
...  

The present paper compares the fault recognition capabilities of two gas turbine diagnostic approaches: data-driven and physics-based (a.k.a. gas path analysis, GPA). The comparison takes into consideration two differences between the approaches: the type of diagnostic space and the diagnostic decision rule. To that end, two stages are proposed. In the first, a data-driven approach with an artificial neural network (ANN) that recognizes faults in the space of measurement deviations is compared with a hybrid GPA approach that employs the same type of ANN to recognize faults in the space of estimated fault parameters. Different case studies for both anomaly detection and fault identification are proposed to evaluate the diagnostic spaces. They are formed by varying the classification, type of diagnostic analysis, and deviation noise scheme. In the second stage, the original GPA is reconstructed by replacing the ANN with a tolerance-based rule for making diagnostic decisions. Here, two aspects are under analysis: the comparison of GPA classification rules and of the whole approaches. The results reveal that for simple classifications both spaces are equally accurate for anomaly detection and fault identification. However, for complex scenarios, the data-driven approach provides on average slightly better results for fault identification. The use of a hybrid GPA with an ANN for a full classification instead of an original GPA with a tolerance-based rule increases recognition accuracy by 12.49% for fault identification and up to 54.39% for anomaly detection. As for the whole-approach comparison, applying a data-driven approach instead of the original GPA can lead to an improvement of 12.14% and 53.26% in recognition accuracy for fault identification and anomaly detection, respectively.
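A tolerance-based decision rule of the kind the original GPA uses can be sketched as flagging a deviation vector as anomalous when any component leaves its per-measurement tolerance band. The deviation and tolerance values below are illustrative numbers, not engine data.

```python
def is_anomalous(deviations, tolerances):
    """deviations: measured-minus-baseline value per gas path
    variable; tolerances: allowed absolute band per variable.
    Returns True if any variable exceeds its band."""
    return any(abs(d) > t for d, t in zip(deviations, tolerances))
```

The data-driven alternative replaces this hand-set band with an ANN classifier trained on the same deviation space, which is the trade-off the paper quantifies.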

