Optimal Planning Method for Large-Scale Historical Exhibits in the Taiwan Railway Museum

2021, Vol 11 (5), pp. 2424
Author(s): Lin Pey Fan, Tzu How Chu

The curation design of cultural heritage sites, such as museums, influences visitor satisfaction and the likelihood of revisitation; an efficient exhibit layout is therefore critical. The difficulty of anticipating visitor behavior and gallery layout makes exhibition layout a knowledge-intensive, time-consuming process. The progressive development of machine learning provides a low-cost and highly flexible workflow for museum management compared with traditional curation design. For example, a facility's optimal layout, floor plan, and furniture arrangement can be obtained through the repeated adjustment of artificial-intelligence algorithms within a relatively short time. An optimal planning method is particularly indispensable for the immense and heavy trains in a railway museum. In this study, we created an innovative strategy that integrates domain knowledge of exhibit display, spatial planning, and machine learning to establish a customized recommendation scheme. Guided by an interactive experience model and the point-line-plane-stereo morphology, we derived three aspects (visitors, objects, and space), 12 dimensions (orientation, visiting time, visual distance, centrality, main path, district, capacity, etc.), 30 physical principles, 24 suggestions, and five main procedures to implement layout patterns and templates, producing an exhibit layout guide for the National Railway Museum of Taiwan, which is currently being converted from a railway workshop to preserve rail cultural heritage. Our results can be extended to other museums by adjusting the criteria used to establish a new recommendation scheme.
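The abstract describes a rule-based recommendation scheme rather than a single algorithm, but the kind of layout optimization it automates can be illustrated with a toy sketch. The exhibit names, popularity weights, and slot coordinates below are all invented: a greedy rule places the most-visited exhibits in the slots nearest the entrance.

```python
# Illustrative sketch only, not the paper's method: a toy greedy placement
# showing the style of layout optimization such a workflow automates.
from math import hypot

def greedy_layout(exhibits, slots, entrance=(0.0, 0.0)):
    """Assign the most-visited exhibits to the slots closest to the entrance."""
    by_popularity = sorted(exhibits, key=lambda e: -e[1])  # (name, visit weight)
    by_distance = sorted(slots,
                         key=lambda s: hypot(s[0] - entrance[0], s[1] - entrance[1]))
    return {name: slot for (name, _), slot in zip(by_popularity, by_distance)}

# Hypothetical exhibits and floor slots (distances in meters from the entrance)
exhibits = [("steam locomotive", 0.9), ("signal box", 0.4), ("rail maps", 0.6)]
slots = [(10, 0), (5, 0), (20, 0)]
layout = greedy_layout(exhibits, slots)
```

A real scheme would add the paper's capacity, main-path, and visual-distance constraints on top of such an objective.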

2020, Vol 52 (1), pp. 477-508
Author(s): Steven L. Brunton, Bernd R. Noack, Petros Koumoutsakos

The field of fluid mechanics is rapidly advancing, driven by unprecedented volumes of data from experiments, field measurements, and large-scale simulations at multiple spatiotemporal scales. Machine learning (ML) offers a wealth of techniques to extract information from data that can be translated into knowledge about the underlying fluid mechanics. Moreover, ML algorithms can augment domain knowledge and automate tasks related to flow control and optimization. This article presents an overview of the history, current developments, and emerging opportunities of ML for fluid mechanics. We outline fundamental ML methodologies and discuss their uses for understanding, modeling, optimizing, and controlling fluid flows. The strengths and limitations of these methods are addressed from the perspective of scientific inquiry, which considers data an inherent part of modeling, experiments, and simulations. ML provides a powerful information-processing framework that can augment, and possibly even transform, current lines of fluid mechanics research and industrial applications.
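One of the data-driven techniques such reviews typically cover is proper orthogonal decomposition (POD), which extracts dominant coherent structures from flow snapshots via the singular value decomposition. The sketch below uses a synthetic rank-1 "flow" plus noise, purely for illustration; it is not drawn from the article itself.

```python
# Minimal POD sketch: columns of `snapshots` are flow states at successive
# times; the SVD yields spatial modes (U) ranked by captured variance.
import numpy as np

rng = np.random.default_rng(0)
space_mode = np.sin(np.linspace(0, np.pi, 64))          # one spatial structure
time_coeff = np.cos(np.linspace(0, 4 * np.pi, 100))     # its temporal dynamics
snapshots = np.outer(space_mode, time_coeff) \
            + 1e-3 * rng.standard_normal((64, 100))     # rank-1 field + noise

U, s, Vt = np.linalg.svd(snapshots, full_matrices=False)
energy = s**2 / np.sum(s**2)
# The first POD mode captures nearly all of the variance of this rank-1 field.
```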


BMC Genomics, 2019, Vol 20 (S11)
Author(s): Tianle Ma, Aidong Zhang

Background: Comprehensive molecular profiling of various cancers and other diseases has generated vast amounts of multi-omics data. Each type of -omics data corresponds to one feature space, such as gene expression, miRNA expression, or DNA methylation. Integrating multi-omics data can link different layers of molecular feature spaces and is crucial for elucidating the molecular pathways underlying various diseases. Machine learning approaches to mining multi-omics data hold great promise for uncovering intricate relationships among molecular features. However, due to the "big p, small n" problem (i.e., small sample sizes with high-dimensional features), training a large-scale, generalizable deep learning model with multi-omics data alone is very challenging.

Results: We developed a method called Multi-view Factorization AutoEncoder (MAE) with network constraints that can seamlessly integrate multi-omics data and domain knowledge such as molecular interaction networks. Our method learns feature and patient embeddings simultaneously with deep representation learning. Both feature representations and patient representations are subject to constraints specified as regularization terms in the training objective. By incorporating domain knowledge into the training objective, we implicitly introduce a good inductive bias into the machine learning model, which helps improve generalizability. We performed extensive experiments on the TCGA datasets and demonstrated the power of integrating multi-omics data with biological interaction networks using the proposed method to predict target clinical variables.

Conclusions: To alleviate overfitting when applying deep learning to multi-omics data with the "big p, small n" problem, it is helpful to incorporate biological domain knowledge into the model as an inductive bias. Designing machine learning models that seamlessly integrate large-scale multi-omics data and biomedical domain knowledge is a promising direction for uncovering intricate relationships among molecular and clinical features.
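A schematic sketch (not the authors' code) of the kind of network constraint such a training objective can include: a graph-Laplacian penalty that pushes the embeddings of interacting molecular features to be similar. The interaction graph and embedding values below are toy data.

```python
# Graph-Laplacian regularizer: tr(Z^T L Z) equals the sum over interaction
# edges of ||z_i - z_j||^2, so connected features are penalized for
# having dissimilar embeddings.
import numpy as np

adj = np.array([[0, 1, 0],          # features 0 and 1 interact; 2 is isolated
                [1, 0, 0],
                [0, 0, 0]], dtype=float)
laplacian = np.diag(adj.sum(axis=1)) - adj

def network_penalty(embeddings, L):
    """Regularization term added to the training objective."""
    return float(np.trace(embeddings.T @ L @ embeddings))

smooth = np.array([[1.0, 0.0], [1.0, 0.0], [5.0, 5.0]])   # neighbors agree
rough  = np.array([[1.0, 0.0], [-1.0, 0.0], [5.0, 5.0]])  # neighbors disagree
# The penalty vanishes when interacting features share an embedding and
# grows with their squared distance.
```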


2021
Author(s): Kenneth Atz, Clemens Isert, Markus N. A. Böcker, José Jiménez-Luna, Gisbert Schneider

Many molecular design tasks benefit from fast and accurate calculations of quantum-mechanical (QM) properties. However, the computational cost of QM methods applied to drug-like molecules currently makes large-scale applications of quantum chemistry challenging. To mitigate this problem, we developed DelFTa, an open-source toolbox for predicting electronic properties of drug-like molecules at the density functional theory (DFT) level using Δ-machine learning. Δ-learning corrects the prediction error (Δ) of a fast but inaccurate property calculation. DelFTa employs state-of-the-art three-dimensional message-passing neural networks trained on a large dataset of QM properties. It provides access to a wide array of quantum observables at the molecular, atomic, and bond levels by predicting approximations to DFT values from a low-cost semiempirical baseline. Δ-learning outperformed its direct-learning counterpart for most of the QM endpoints considered. The results suggest that predictions for non-covalent intra- and intermolecular interactions can be extrapolated to larger biomolecular systems. The software is fully open-sourced and features documented command-line and Python APIs.
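The Δ-learning idea itself (not DelFTa's actual networks) can be illustrated in a few lines: a model is trained on the residual between a cheap baseline and an expensive reference, and predictions add the learned correction back to the baseline. The "baseline" and "reference" functions below are synthetic stand-ins.

```python
# Toy Δ-learning: learn the gap between a fast, biased calculation and the
# expensive reference, then predict as baseline + correction.
def cheap_baseline(x):
    return 2.0 * x                      # fast but systematically biased

def reference(x):
    return 2.0 * x + 0.5                # expensive "DFT-level" truth

# "Training": fit the residual (here it is trivially a constant offset,
# so the mean residual over the training set recovers it exactly).
train_x = [0.0, 1.0, 2.0]
delta = sum(reference(x) - cheap_baseline(x) for x in train_x) / len(train_x)

def delta_predict(x):
    return cheap_baseline(x) + delta    # baseline + learned correction
```

In DelFTa the residual model is a 3D message-passing neural network and the baseline is a semiempirical QM calculation; the structure of the prediction is the same.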


2019
Author(s): Shaoqing Dai, Xiaoman Zheng, Lei Gao, Shudi Zuo, Qi Chen, ...

High-precision prediction of large-scale forest aboveground biomass (AGB) is important but challenging because of the uncertainty involved in the prediction process, especially uncertainty due to non-representative sample units. Such units, usually the result of inadequate sampling, are common and tend to form geographic clusters of localities; they cannot fully capture the complex, spatially heterogeneous patterns in which multiple environmental covariates (such as longitude, latitude, and forest structure) shape the spatial distribution of AGB. To address this challenge, we propose a low-cost approach that combines machine learning with spatial statistics to construct a regional AGB map from non-representative sample units. The experimental results demonstrate that the combined method improves the accuracy of AGB mapping in regions where only non-representative sample units are available. This work provides a useful reference for AGB remote-sensing mapping and ecological modelling in various regions of the world.
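The "machine learning plus spatial statistics" combination can be sketched with a deliberately simplified stand-in: a regression captures the covariate trend, and the residuals at sample plots are interpolated spatially (inverse-distance weighting here plays the role a kriging model would). All coordinates, elevations, and AGB values are toy data, not the paper's.

```python
# Trend model + spatial residual correction, a common hybrid for mapping
# from sparse, clustered plots.
import numpy as np

# (x, y, elevation, observed AGB) at hypothetical sample plots
plots = np.array([[0, 0, 100, 50.0],
                  [1, 0, 200, 72.0],
                  [0, 1, 150, 60.0],
                  [1, 1, 250, 84.0]])
X = np.column_stack([np.ones(len(plots)), plots[:, 2]])
coef, *_ = np.linalg.lstsq(X, plots[:, 3], rcond=None)   # trend: AGB ~ elevation
resid = plots[:, 3] - X @ coef                           # what the trend misses

def predict(x, y, elev, power=2.0):
    """Trend prediction plus inverse-distance-weighted residual."""
    trend = coef[0] + coef[1] * elev
    d = np.hypot(plots[:, 0] - x, plots[:, 1] - y)
    w = 1.0 / np.maximum(d, 1e-9) ** power
    return float(trend + np.sum(w * resid) / np.sum(w))
```

At a sampled location the residual term reproduces the observation exactly, while far from the plots the prediction falls back to the covariate trend.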


2019, Vol 9 (11), pp. 2389
Author(s): Chengquan Zhou, Hongbao Ye, Zhifu Xu, Jun Hu, Xiaoyan Shi, ...

Leaf coverage is an indicator of plant growth rate and predicted yield, and is thus crucial to plant-breeding research. Robust segmentation of leaf coverage from remote-sensing images acquired by unmanned aerial vehicles (UAVs) in varying environments can be used directly for large-scale coverage estimation and is a key component of high-throughput field phenotyping. We therefore propose a machine-learning-based image-segmentation method to extract relatively accurate coverage information from the orthophoto generated after preprocessing. The image-analysis pipeline, comprising dataset augmentation, background removal, classifier training, and noise reduction, generates a set of binary masks from which leaf coverage is obtained. We compare the proposed method with three conventional methods (Hue-Saturation-Value thresholding, an edge-detection-based algorithm, and random forest) and a state-of-the-art deep-learning method, DeepLabv3+. The proposed method improves indicators such as Qseg, Sr, Es, and mIOU by 15% to 30%. The experimental results show that this approach is less limited by radiation conditions and that the protocol can easily be implemented for extensive sampling at low cost. As a result, we recommend using red-green-blue (RGB)-based technology, in addition to conventional equipment, for acquiring the leaf coverage of agricultural crops.
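The binary-mask-to-coverage step can be illustrated with a much simpler stand-in for the paper's trained classifier: a fixed excess-green rule labels each pixel leaf or background, and coverage is the fraction of leaf pixels in the mask. The 2x2 image and the threshold are invented for the example.

```python
# Simplified pixel classification: excess-green index 2g - r - b on
# normalized RGB channels, thresholded into a binary leaf mask.
import numpy as np

def leaf_mask(rgb, threshold=0.1):
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    exg = 2.0 * g - r - b
    return exg > threshold

img = np.array([[[0.2, 0.8, 0.1], [0.5, 0.5, 0.5]],
                [[0.1, 0.9, 0.2], [0.6, 0.4, 0.3]]])  # two green, two gray pixels
mask = leaf_mask(img)
coverage = mask.mean()     # fraction of pixels classified as leaf
```

In the actual pipeline a trained classifier replaces the fixed rule, and noise reduction cleans the mask before coverage is computed.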


2020
Author(s): Paul J Barr, James Ryan, Nicholas C Jacobson

The novel coronavirus (SARS-CoV-2) and its related disease, COVID-19, are spreading exponentially across the world, yet there is still uncertainty about the clinical phenotype. Natural language processing (NLP) and machine learning may hold one key to quickly identifying individuals at high risk for COVID-19 and understanding the key symptoms in its clinical manifestation and presentation. In healthcare, such data often come from the medical record, yet overburdened clinicians may focus on documenting widely reported symptoms that appear to confirm a COVID-19 diagnosis, at the expense of infrequently reported symptoms. A comprehensive record of the clinic visit is required, and an audio recording may be the answer. Done at scale, a combination of EHR data and recordings of clinic visits could power NLP and machine learning models, quickly creating a clinical phenotype of COVID-19. We propose the creation of a pipeline from the audio/video recording of clinic visits to a clinical symptomatology model and prediction of COVID-19 infection. With vast amounts of data available, we believe a prediction model can be developed quickly that could support accurate screening of individuals at risk of COVID-19 and identify patient characteristics predicting a more severe infection. If clinical encounters are recorded and our NLP is adequately refined, benchtop virology will be better informed and the risk of spread reduced. While recordings of clinic visits are not a panacea for this pandemic, they are a low-cost option with many potential benefits that have only begun to be explored.
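The first NLP step such a pipeline would need, extracting symptom mentions from a visit transcript, can be sketched as simple lexicon matching. The symptom lexicon and transcript are invented, and a real system would use a trained model with negation handling (note that "denies fatigue" below is still matched).

```python
# Toy symptom-mention extraction from transcript text via lexicon lookup.
# Deliberately naive: no negation detection, no multi-word terms.
SYMPTOM_LEXICON = {"fever", "cough", "fatigue", "anosmia"}

def extract_symptoms(transcript):
    tokens = {t.strip(".,").lower() for t in transcript.split()}
    return sorted(SYMPTOM_LEXICON & tokens)

note = "Patient reports fever and a dry cough, denies fatigue."
found = extract_symptoms(note)
```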


2021, Vol 256, pp. 01036
Author(s): Yi Luo, Yin Zhang, Muyi Tang, Youbin Zhou, Ying Wang, ...

With the large-scale application of LCC-HVDC and VSC-HVDC in power systems, mutually exclusive constraints (MES) arise in the optimal planning of hybrid AC/DC receiving-end power grids. The MES enlarge the construction set of substations and transmission lines and reduce the efficiency and effectiveness of conventional planning methods. This paper proposes a novel planning method for hybrid AC/DC receiving-end grids with MES. A constraint satisfaction problem (CSP) formulation is used to model the set of mutually exclusive candidate lines: the mutually exclusive candidates are converted to mutually exclusive variables and introduced into the planning model as constraints. After the hybrid AC/DC receiving-end grid planning model with MES is established, the backtracking search algorithm (BSA) is used to solve the optimal planning problem. The effectiveness of the proposed method is verified by case studies.
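The core modeling idea, candidate lines as decision variables with mutual-exclusion constraints, solved by backtracking search, can be sketched far more simply than the paper's planning model. The candidate lines, costs, capacities, and demand below are all invented.

```python
# Toy CSP with a mutual-exclusion constraint: pick a cheapest subset of
# candidate lines meeting demand, where some pairs cannot both be built.
candidates = {"AC1": (3, 40), "AC2": (2, 30), "DC1": (5, 80)}  # (cost, capacity)
exclusive = [("AC1", "DC1")]   # mutually exclusive pair
demand = 70

def feasible(selection):
    if any(a in selection and b in selection for a, b in exclusive):
        return False
    return sum(candidates[c][1] for c in selection) >= demand

def backtrack(names, selection=frozenset(), best=None):
    cost = sum(candidates[c][0] for c in selection)
    if best is not None and cost >= best[0]:
        return best                                   # cost-bound pruning
    if feasible(selection):
        return (cost, selection)                      # adding lines only costs more
    if not names:
        return best
    head, rest = names[0], names[1:]
    best = backtrack(rest, selection | {head}, best)  # branch: build the line
    best = backtrack(rest, selection, best)           # branch: skip the line
    return best

best = backtrack(sorted(candidates))
```

Here the exclusion forces the cheaper AC1+AC2 combination even though DC1 alone would cover demand; the paper's model additionally carries substation siting and grid-operation constraints.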


It is known in the literature that ontologies have been used extensively in machine learning to enhance text-retrieval performance, and that a robust ontology with a detailed description of the domain knowledge contributes to retrieval accuracy. Nevertheless, we argue that in some domains, such as news text retrieval, building an ontology manually can be costly for a large-scale news repository, especially when content changes with dynamic events. In addition, maintenance can be a daunting task, as the ontology must keep up with new words associated with new events. This paper demonstrates an attempt to fully automate the development of an ontology for identifying the news domain and its subdomains. The ontology specification is defined based on the accuracy requirements of retrieval, the mechanism for generating the specification is described, and the retrieval-performance results are discussed.
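One simple way to bootstrap an ontology-like structure from a corpus, which a fully automated approach could build on, is to associate each subdomain with its most frequent content terms. The corpus, stopword list, and function names below are invented for illustration; a real system would add relations, synonyms, and update rules for new events.

```python
# Toy automatic "ontology" generation: subdomain -> top content terms,
# derived purely from document statistics (no manual curation).
from collections import Counter

STOPWORDS = {"the", "a", "in", "of", "to", "as"}

def build_ontology(corpus, top_k=2):
    """corpus: {subdomain: [documents]} -> {subdomain: [top terms]}."""
    ontology = {}
    for subdomain, docs in corpus.items():
        counts = Counter(w for doc in docs
                         for w in doc.lower().split() if w not in STOPWORDS)
        ontology[subdomain] = [w for w, _ in counts.most_common(top_k)]
    return ontology

corpus = {"sports": ["the match ended in a draw", "match highlights tonight"],
          "finance": ["stocks rally as markets open", "markets close higher"]}
onto = build_ontology(corpus)
```

When new documents arrive, rerunning the builder updates the term lists automatically, which is exactly the maintenance burden that manual ontologies struggle with.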

