large computer
Recently Published Documents


TOTAL DOCUMENTS

220
(FIVE YEARS 21)

H-INDEX

17
(FIVE YEARS 2)

2021 ◽  
Vol 6 (01) ◽  
pp. 60-70
Author(s):  
Shesha Kanta Pangeni

Integrating mobile technologies into training and instruction is important for learning facilitation these days, because the number of mobile device users who see the devices as enablers of learning opportunities anytime, anywhere, keeps growing. In addition, learners like to receive information, learning resources, and activities on their mobile devices. In this context, this paper reports lessons from action research on the use of a customized Android mobile application at a teacher education institution in Nepal. The research began with the purpose of promoting a mobile app for e-learning that improves access to e-learning resources and enables instant communication for course activities. An online survey, informal interactions, and interviews were used to collect data. Activity theory informed the analysis of the app's use in learning facilitation. The research shows that course facilitators rarely used the mobile app, preferring web browsers on their large computer screens. Students, however, did use the app and wanted an updated version with a more user-friendly interface. The main lesson from the research is that the roles of the institution and facilitators are important in creating and providing mobile-friendly options for learning facilitation, where students themselves can explore the internet, learn, and use the applications and tools their learning requires. Training institutions can introduce mobile applications to change training methods and pedagogical practices through technological intervention, trainers can consider mobile apps for techno-friendly instructional experiences, and learners can access training resources and other learning through mobile apps to enhance their knowledge and skills.


2021 ◽  
Vol 14 (12) ◽  
pp. 7659-7672
Author(s):  
Duncan Watson-Parris ◽  
Andrew Williams ◽  
Lucia Deaconu ◽  
Philip Stier

Abstract. Large computer models are ubiquitous in the Earth sciences. These models often have tens or hundreds of tuneable parameters and can take thousands of core hours to run to completion while generating terabytes of output. It is becoming common practice to develop emulators as fast approximations, or surrogates, of these models in order to explore the relationships between these inputs and outputs, understand uncertainties, and generate large ensemble datasets. While the purpose of these surrogates may differ, their development is often very similar. Here we introduce ESEm: an open-source tool providing a general workflow for emulating and validating a wide variety of models and outputs. It includes efficient routines for sampling these emulators for the purpose of uncertainty quantification and model calibration. It is built on well-established, high-performance libraries to ensure robustness, extensibility and scalability. We demonstrate the flexibility of ESEm through three case studies: reducing parametric uncertainty in a general circulation model, exploring precipitation sensitivity in a cloud-resolving model, and exploring scenario uncertainty in the CMIP6 multi-model ensemble.
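ESEm builds its emulators on established libraries; as a rough illustration of the underlying idea (not ESEm's actual API), the sketch below fits a minimal Gaussian-process emulator in plain NumPy to a toy one-parameter "model", then queries it cheaply at many new parameter values, with a predictive variance for uncertainty quantification. The toy model, lengthscale, and all names here are hypothetical.

```python
import numpy as np

# Toy stand-in for an expensive model: one tuneable parameter -> scalar output.
def expensive_model(x):
    return np.sin(3 * x) + 0.5 * x

def rbf(a, b, ls=0.5):
    # Squared-exponential (RBF) covariance between two sets of 1-D inputs.
    d = a[:, None] - b[None, :]
    return np.exp(-0.5 * (d / ls) ** 2)

# "Training runs" of the expensive model at a handful of parameter settings.
X = np.linspace(0, 2, 8)
y = expensive_model(X)

# Fit the GP: factorize the kernel matrix once (jitter for numerical stability).
K = rbf(X, X) + 1e-8 * np.eye(len(X))
L = np.linalg.cholesky(K)
alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))

# Emulate at many new points, far cheaper than re-running the model.
Xs = np.linspace(0, 2, 200)
Ks = rbf(Xs, X)
mean = Ks @ alpha                    # emulator prediction
v = np.linalg.solve(L, Ks.T)
var = np.clip(np.diag(rbf(Xs, Xs)) - np.einsum('ij,ij->j', v, v), 0, None)

# Maximum emulator error over the dense grid, for a quick sanity check.
print(float(np.max(np.abs(mean - expensive_model(Xs)))))
```

The predictive variance `var` is what makes such surrogates useful for calibration: it flags parameter regions where the emulator, and hence any sampling based on it, is untrustworthy.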


Author(s):  
A. P. Nosov ◽  
A. A. Akhrem ◽  
V. Z. Rakhmankulov

The paper studies problems of reduction (decomposition) of OLAP hypercube multidimensional data models. When decomposing large hypercubes of multidimensional data into sub-cube components, the goal is to increase the computational performance of analytical OLAP systems: the computational complexity of reduction methods for solving OLAP data-analysis problems should be lower than that of non-reduction methods, which are applied directly to the data over the whole hypercube. The paper formalizes the concepts of reduction and non-reduction methods and defines an upper bound for the change in the computational complexity of reduction methods under decomposition of the multidimensional OLAP data-analysis problem, in comparison with non-reduction methods, within the class of exponential computational complexity. Exact values of this upper bound are obtained for the decomposition of a hypercube into two sub-cubes on sets consisting of an even and an odd number of sub-cube structures, and its main properties, used to determine decomposition efficiency, are given. A formula for the efficiency of decomposition into two sub-cube structures for the reduction of OLAP data-analysis problems is derived, and it is shown that as the dimension n of the lattice specifying the number of sub-cubes in the hypercube data structure grows, the efficiency of such a decomposition obeys an exponential law with exponent n/2, regardless of the parity of n.
The examples show how the obtained values of the upper bound on the change in computational complexity can be used to establish effectiveness criteria for reduction methods and to judge the expediency of decomposition in specific cases. The results can be used in the processing and analysis of the information arrays of hypercube structures in analytical OLAP systems belonging to Big Data or very large multidimensional-data computer systems.
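As a hedged numerical illustration of the claimed exponent (an assumed cost model, not the paper's formula): if a non-reduction method costs on the order of 2^n over the full hypercube, while decomposition yields two sub-problems each costing on the order of 2^(n/2), the resulting speedup grows like 2^(n/2 − 1), i.e. an exponential law with exponent n/2, consistent with the abstract.

```python
def nonreduction_cost(n, c=1.0):
    # Assumed cost of an exponential-complexity method over all 2**n sub-cube structures.
    return c * 2 ** n

def reduction_cost(n, c=1.0):
    # Assumed cost after decomposing into two halves of n/2 lattice dimensions each.
    return 2 * c * 2 ** (n / 2)

for n in (8, 16, 32):
    speedup = nonreduction_cost(n) / reduction_cost(n)
    print(n, speedup)   # grows like 2 ** (n / 2 - 1)
```

Under these assumptions the speedup at n = 8, 16, 32 is 8, 128, and 32768 respectively, so even modest lattice dimensions make decomposition worthwhile when it is applicable.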


10.37236/9653 ◽  
2021 ◽  
Vol 28 (4) ◽  
Author(s):  
Ilan Adler ◽  
Jesús A. De Loera ◽  
Steven Klee ◽  
Zhenyang Zhang

Oriented matroids are combinatorial structures that generalize point configurations, vector configurations, hyperplane arrangements, polyhedra, linear programs, and directed graphs. Oriented matroids have played a key role in combinatorics, computational geometry, and optimization. This paper surveys prior work and presents an update on the search for bounds on the diameter of the cocircuit graph of an oriented matroid. The motivation for our investigations is the complexity of the simplex method and the criss-cross method. We review the diameter problem and show that the diameter bounds for general oriented matroids reduce to those for uniform oriented matroids. We give the latest exact bounds for oriented matroids of low rank and low corank, and for all oriented matroids with up to nine elements (this part required a large computer-based proof). For arbitrary oriented matroids, we present an improvement to a quadratic bound of Finschi. Our discussion highlights an old conjecture that states a linear bound for the diameter is possible. On the positive side, we show the conjecture is true for oriented matroids of low rank and low corank and, verified with computers, for all oriented matroids with up to nine elements. On the negative side, our computer search showed that two natural strengthenings of the main conjecture are false.


Machines ◽  
2021 ◽  
Vol 9 (10) ◽  
pp. 231
Author(s):  
Luo Fang ◽  
Qiang Liu ◽  
Ding Zhang

In design and operation scenarios driven by Digital Twins, large computer-aided design (CAD) models of production-line equipment can limit the real-time performance and fidelity of the interaction between digital and physical entities. Digital CAD models often consist of combined parts characterized by discrete folded corner planes. CAD models simplified to a lower resolution by current mainstream mesh-simplification algorithms may suffer from significant feature loss and mesh breakage, and the interfaces between different parts cannot be well identified and simplified. A lightweight approach for common CAD assembly models in Digital Twins is proposed. Based on quadric error metrics, constraints reflecting the discrete folded-corner-plane characteristics of Digital Twin CAD models are added. The triangular regularity in the neighborhood of the contraction target vertices is used as a penalty function, and edge contraction is performed according to its cost. Finally, a segmentation algorithm is employed to identify and remove the interfaces between CAD assembly models. The proposed approach is verified on common stereoscopic warehouse, robot base, and shelf models, and applied to a smartphone production-line scenario. The experimental results indicate that the geometric error of the simplified mesh is reduced, the frame rate is improved, and the integrity of the geometric features and triangular facets is effectively preserved.
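The quadric error metric the approach builds on is the standard Garland–Heckbert construction: each triangle contributes a plane quadric K_p = p pᵀ, a vertex's quadric Q is the sum over its incident faces, and the cost of placing a vertex at position v is vᵀQv. The minimal NumPy sketch below illustrates just this core metric; the paper's folded-corner-plane constraints and triangular-regularity penalty are not reproduced here.

```python
import numpy as np

def plane_quadric(p0, p1, p2):
    # Fundamental error quadric K_p = p p^T for the plane of a triangle,
    # with p = [a, b, c, d], plane ax + by + cz + d = 0 and (a, b, c) a unit normal.
    n = np.cross(p1 - p0, p2 - p0)
    n = n / np.linalg.norm(n)
    d = -np.dot(n, p0)
    p = np.append(n, d)
    return np.outer(p, p)

def vertex_error(Q, v):
    # Quadric error of placing a vertex at v (homogeneous coordinates).
    vh = np.append(v, 1.0)
    return float(vh @ Q @ vh)

# Two triangles sharing an edge, both lying in the z = 0 plane.
tris = [(np.array([0., 0, 0]), np.array([1., 0, 0]), np.array([0., 1, 0])),
        (np.array([1., 0, 0]), np.array([1., 1, 0]), np.array([0., 1, 0]))]

# Quadric at a vertex on the shared edge: sum of its faces' quadrics.
Q = sum(plane_quadric(*t) for t in tris)

# Moving the vertex within the common plane costs nothing; moving off-plane does.
print(vertex_error(Q, np.array([0.5, 0.5, 0.0])))  # → 0.0
print(vertex_error(Q, np.array([0.5, 0.5, 1.0])))  # → 2.0 (two faces, unit offset)
```

An edge-collapse simplifier ranks candidate contractions by this cost; the paper's contribution is to bias that ranking with extra constraints so that folded-corner features survive simplification.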



Energies ◽  
2021 ◽  
Vol 14 (16) ◽  
pp. 4911
Author(s):  
Jian Zhang ◽  
Mingjian Cui ◽  
Yigang He

Distributed generators providing auxiliary services are an important means of guaranteeing the safe and economical operation of a distribution system. In this paper, considering an energy storage system (ESS), switchable capacitor reactor (SCR), step voltage regulator (SVR), and static VAR compensator (SVC), a two-stage, multi-period, mixed-integer second-order cone programming (SOCP) robust model with some DGs providing auxiliary services is developed. If the conic relaxation is not exact, a sequential SOCP is formulated using the convex–concave procedure (CCP) and cuts, which can be solved quickly, and the exact solution of the original problem can be recovered. Furthermore, in view of the large computer storage requirements and slow computational rate of the column-and-constraint generation (CCG) method, a method that solves the master problem and sub-problem directly and iteratively is proposed. No additional variables or constraints are needed to solve the master problem, and for the sub-problem only the model of each single time period needs to be solved; the objective function values of the periods are then accumulated, and the worst-case scenarios of each time period are concatenated. As a result, a large amount of storage memory is saved and computational efficiency is greatly enhanced. The capability of the proposed method is validated with three simulation cases.


2021 ◽  
Vol 8 (2) ◽  
pp. 205395172110359
Author(s):  
Emily Denton ◽  
Alex Hanna ◽  
Razvan Amironesei ◽  
Andrew Smart ◽  
Hilary Nicole

In response to growing concerns about bias, discrimination, and unfairness perpetuated by algorithmic systems, the datasets used to train and evaluate machine learning models have come under increased scrutiny. Many of these examinations have focused on the contents of machine learning datasets, finding glaring underrepresentation of minoritized groups. In contrast, relatively little work has been done to examine the norms, values, and assumptions embedded in these datasets. In this work, we conceptualize machine learning datasets as a type of informational infrastructure and motivate genealogy as a method for examining the histories and modes of constitution at play in their creation. We present a critical history of ImageNet as an exemplar, utilizing critical discourse analysis of major texts around ImageNet's creation and impact. We find that the assumptions around ImageNet, and other large computer vision datasets more generally, rely on three themes: the aggregation and accumulation of more data, the computational construction of meaning, and making certain types of data labor invisible. By tracing the discourses that surround this influential benchmark, we contribute to the ongoing development of standards and norms around data development in machine learning and artificial intelligence research.


2021 ◽  
Author(s):  
Abdel H Charty

Harmonics are increasingly becoming a major source of power quality problems in today's commercial power distribution systems. Although they have been present since the early days of AC power systems, their level has dramatically increased and their effect on power distribution has become more and more noticeable over the last decade. This dramatic increase in harmonic levels is mainly due to the introduction of nonlinear loads such as personal computers, servers, variable-frequency drives, and UPS systems. Harmonic problems are especially common in commercial buildings housing large computer rooms, where the concentration of nonlinear loads per square foot is very high and continues to grow as the footprint of network and communication equipment becomes smaller. This report takes a deep look at harmonics in the power distribution systems of commercial buildings, their sources, and the different ways in which they can affect electrical systems and power quality. Some of the solutions commonly used to deal with harmonics are reviewed, and a critical analysis of their effectiveness is provided. Computer simulations using MATLAB Simulink have been developed to illustrate key points where possible.
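The harmonic content described above is commonly summarized as total harmonic distortion (THD): the RMS of all harmonic components divided by the RMS of the fundamental. As a hedged, self-contained illustration (a synthetic waveform, not data from the report's simulations), the snippet below injects typical 3rd and 5th harmonics into a 60 Hz fundamental and recovers the THD from an FFT:

```python
import numpy as np

# Hypothetical distorted load current: 60 Hz fundamental plus typical
# odd harmonics of the kind injected by rectifier-type nonlinear loads.
fs = 6000          # sampling rate, Hz (100 samples per 60 Hz cycle)
t = np.arange(0, 1, 1 / fs)
i_t = (1.00 * np.sin(2 * np.pi * 60 * t)
       + 0.30 * np.sin(2 * np.pi * 180 * t)    # 3rd harmonic
       + 0.15 * np.sin(2 * np.pi * 300 * t))   # 5th harmonic

# Amplitude spectrum; with a 1 s window, FFT bin k corresponds to k Hz.
spec = np.abs(np.fft.rfft(i_t)) * 2 / len(t)
fund = spec[60]
harmonics = spec[120::60]                      # bins at 120 Hz, 180 Hz, ...

thd = np.sqrt(np.sum(harmonics ** 2)) / fund
print(round(thd * 100, 1))                     # → 33.5 (% THD)
```

Here sqrt(0.30² + 0.15²) / 1.00 ≈ 0.335, so a THD of about 33.5 %, which is in the range that makes mitigation (filters, K-rated transformers, and similar measures reviewed in the report) worth considering in computer-room distribution.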


