Using a virtual machine environment for developing, testing, and training for the UM-UKCA composition-climate model, using Unified Model version 10.9 and above

2018
Vol 11 (9)
pp. 3647-3657
Author(s):
Nathan Luke Abraham
Alexander T. Archibald
Paul Cresswell
Sam Cusworth
Mohit Dalvi
...

Abstract. The Met Office Unified Model (UM) is a state-of-the-art weather and climate model that is used operationally worldwide. UKCA is the chemistry and aerosol sub-model of the UM that enables interactive composition and physical atmosphere interactions, but which adds an additional 120 000 lines of code to the model. Ensuring that the UM code and UM-UKCA (the UM running with interactive chemistry and aerosols) are well tested is thus essential. While a comprehensive test harness is in place at the Met Office and partner sites to aid development, it is not available to many UM users. Recently, the Met Office have made available a virtual machine environment that can be used to run the UM on a desktop or laptop PC. Here we describe the development of a UM-UKCA configuration that is able to run within this virtual machine while needing only 6 GB of memory, before discussing the applications of this system for model development, testing, and training.





2021
Author(s):
Christian Zeman
Christoph Schär

Abstract. Since their first operational application in the 1950s, atmospheric numerical models have become essential tools in weather and climate prediction. As such, they are subject to continuous changes, thanks to advances in computer systems, numerical methods, more and better observations, and the ever-increasing knowledge about the atmosphere of Earth. Many of the changes in today's models relate to seemingly innocuous modifications associated with minor code rearrangements, changes in hardware infrastructure, or software updates. Such changes are not supposed to significantly affect the model. However, this is difficult to verify, because our atmosphere is a chaotic system in which even a tiny change can have a big impact on individual simulations. Overall, this represents a serious challenge to a consistent model development and maintenance framework. Here we propose a new methodology for quantifying and verifying the impacts of minor changes to an atmospheric model, or to its underlying hardware/software system, by using a set of simulations with slightly different initial conditions in combination with a statistical hypothesis test. The methodology can assess the effects of model changes on almost any output variable over time, and can also be used with different underlying statistical hypothesis tests. We present first applications of the methodology with a regional weather and climate model, including the verification of a major system update of the underlying supercomputer. While providing very robust results, the methodology shows a great sensitivity even to tiny changes. Results show that changes are often only detectable during the first hours, which suggests that short-term simulations (days to months) are best suited for the methodology, even when addressing long-term climate simulations.
We also show that the choice of the underlying statistical hypothesis test is not critical and that the methodology already works well at coarse resolutions, making it computationally inexpensive and therefore an ideal candidate for automated testing.
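The core idea of the methodology can be illustrated with a small sketch: run two ensembles with slightly perturbed initial conditions (reference model vs. changed model), then apply a two-sample hypothesis test to each output variable. The Kolmogorov-Smirnov test and Gaussian toy data below are illustrative stand-ins only; the actual model output, ensemble sizes, and choice of test in the paper differ.

```python
import math
import random

def ks_statistic(a, b):
    """Two-sample Kolmogorov-Smirnov statistic: the maximum distance
    between the empirical CDFs of samples a and b."""
    d = 0.0
    for v in sorted(set(a) | set(b)):
        cdf_a = sum(1 for x in a if x <= v) / len(a)
        cdf_b = sum(1 for x in b if x <= v) / len(b)
        d = max(d, abs(cdf_a - cdf_b))
    return d

def distributions_differ(a, b, alpha=0.05):
    """Reject the null hypothesis 'same distribution' using the
    large-sample KS critical value at significance level alpha."""
    n, m = len(a), len(b)
    crit = math.sqrt(-0.5 * math.log(alpha / 2)) * math.sqrt((n + m) / (n * m))
    return ks_statistic(a, b) > crit

# Toy "ensembles": one output variable (e.g. 2 m temperature in K) from
# 20 members with perturbed initial conditions.
rng = random.Random(42)
reference = [rng.gauss(288.0, 0.5) for _ in range(20)]
changed   = [rng.gauss(289.5, 0.5) for _ in range(20)]  # 3-sigma shift

print(distributions_differ(reference, changed))  # a 3-sigma shift is detected: True
```

In practice this test would be repeated per variable and per output time, which is why the abstract's finding that differences are mostly detectable in the first hours makes short simulations sufficient.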


2021
Vol 11 (1)
Author(s):
Matvey Ezhov
Maxim Gusarev
Maria Golitsyna
Julian M. Yates
Evgeny Kushnerev
...

Abstract. In this study, a novel AI system based on deep learning methods was evaluated to determine its real-time performance in CBCT imaging diagnosis of anatomical landmarks and pathologies, as well as its clinical effectiveness and safety when used by dentists in a clinical setting. The system consists of 5 modules: an ROI-localization module (segmentation of teeth and jaws), a tooth-localization and numeration module, a periodontitis module, a caries-localization module, and a periapical-lesion-localization module. These modules use CNNs based on state-of-the-art architectures. In total, 1346 CBCT scans were used to train the modules. After annotation and model development, the diagnostic capabilities of the Diagnocat AI system were tested. 24 dentists participated in the clinical evaluation of the system. 30 CBCT scans were examined by two groups of dentists, where one group was aided by Diagnocat and the other was unaided. The overall sensitivity and specificity for the aided and unaided groups were calculated as an aggregate of all conditions. The sensitivity values for the aided and unaided groups were 0.8537 and 0.7672, while the specificity values were 0.9672 and 0.9616, respectively. There was a statistically significant difference between the groups (p = 0.032). This study showed that the proposed AI system significantly improved the diagnostic capabilities of dentists.
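As a minimal illustration of the aggregate metrics involved, the snippet below computes sensitivity and specificity from confusion-matrix counts and applies a two-proportion z-test to the difference in sensitivity. The counts are invented for illustration and are not the study's data; the abstract does not report per-condition counts or the exact statistical test used.

```python
import math

def sensitivity(tp, fn):
    """True positive rate: detected findings / all present findings."""
    return tp / (tp + fn)

def specificity(tn, fp):
    """True negative rate: correctly ruled out / all absent findings."""
    return tn / (tn + fp)

def two_proportion_z(x1, n1, x2, n2):
    """z statistic for the difference between two proportions (pooled SE)."""
    p = (x1 + x2) / (n1 + n2)
    se = math.sqrt(p * (1 - p) * (1 / n1 + 1 / n2))
    return (x1 / n1 - x2 / n2) / se

# Invented counts aggregated over all conditions (NOT the study's data):
# aided readers detect 170 of 200 present findings, unaided 153 of 200.
print(sensitivity(170, 30))   # 0.85
print(specificity(290, 10))   # ~0.9667
z = two_proportion_z(170, 200, 153, 200)
p_two_sided = math.erfc(abs(z) / math.sqrt(2))
print(z > 1.96, p_two_sided < 0.05)  # True True: significant at alpha = 0.05
```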


2021
Vol 13 (10)
pp. 1985
Author(s):
Emre Özdemir
Fabio Remondino
Alessandro Golkar

With recent advances in technology, deep learning is being applied to more and more tasks. In particular, point cloud processing and classification have been studied for a while now, with various methods developed. Some of the available classification approaches are based on a specific data source, such as LiDAR, while others are focused on specific scenarios, such as indoor scenes. A general major issue is computational efficiency (in terms of power consumption, memory requirements, and training/inference time). In this study, we propose an efficient framework (named TONIC) that can work with any kind of aerial data source (LiDAR or photogrammetry) and does not require high computational power, while achieving accuracy on par with current state-of-the-art methods. We also test our framework for its generalization ability, showing its capability to learn from one dataset and predict on unseen aerial scenarios.


2020
Vol 34 (05)
pp. 8600-8607
Author(s):
Haiyun Peng
Lu Xu
Lidong Bing
Fei Huang
Wei Lu
...

Target-based sentiment analysis or aspect-based sentiment analysis (ABSA) refers to addressing various sentiment analysis tasks at a fine-grained level, which includes but is not limited to aspect extraction, aspect sentiment classification, and opinion extraction. There exist many solvers of the above individual subtasks, or of combinations of two subtasks, and together they can tell a complete story, i.e. the discussed aspect, the sentiment on it, and the cause of that sentiment. However, no previous ABSA research has tried to provide a complete solution in one shot. In this paper, we introduce a new subtask under ABSA, named aspect sentiment triplet extraction (ASTE). In particular, a solver of this task needs to extract triplets (What, How, Why) from the inputs, which show WHAT the targeted aspects are, HOW their sentiment polarities are classified, and WHY they have such polarities (i.e. the opinion reasons). For instance, one triplet from "Waiters are very friendly and the pasta is simply average" could be ('Waiters', positive, 'friendly'). We propose a two-stage framework to address this task. The first stage predicts the what, how, and why in a unified model; the second stage then pairs up the predicted what (how) and why from the first stage to output triplets. In our experiments, the framework sets a benchmark performance for this novel triplet extraction task and outperforms several strong baselines adapted from state-of-the-art related methods.
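A toy sketch of the second (pairing) stage, assuming the first stage has already produced aspect spans with polarities (the what/how) and opinion spans (the why). The token-proximity heuristic, the span layout, and the "pasta"/neutral pair below are purely illustrative; the actual framework scores candidate pairs with a learned model rather than a distance rule.

```python
def pair_triplets(aspects, opinions, max_dist=5):
    """Toy stage-2 pairing: link each predicted (aspect, polarity) to the
    closest opinion span within max_dist tokens, preferring opinions that
    follow the aspect when distances tie."""
    triplets = []
    for (a_start, a_end), a_text, polarity in aspects:
        best = None  # (score, opinion_text); lower score wins
        for (o_start, o_end), o_text in opinions:
            gap = o_start - a_end if o_start > a_end else a_start - o_end
            follows = 0 if o_start > a_end else 1  # tie-break: prefer "after"
            score = (gap, follows)
            if gap <= max_dist and (best is None or score < best[0]):
                best = (score, o_text)
        if best is not None:
            triplets.append((a_text, polarity, best[1]))
    return triplets

# "Waiters are very friendly and the pasta is simply average"
# token idx:  0     1    2    3       4   5   6    7    8     9
aspects  = [((0, 0), "Waiters", "positive"),
            ((6, 6), "pasta", "neutral")]   # polarities illustrative
opinions = [((3, 3), "friendly"), ((9, 9), "average")]

print(pair_triplets(aspects, opinions))
# [('Waiters', 'positive', 'friendly'), ('pasta', 'neutral', 'average')]
```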

