Introduction

2020 ◽  
pp. 1-5
Author(s):  
Sandip Tiwari

Semiconductors, as crystalline, polycrystalline or amorphous inorganic solids, as ordered or disordered organic solids or even in glassy and liquid forms, form a large set of materials useful in active and passive devices. The control of their properties arising in an interaction of particles—atoms, electrons, photons, their elementary one- and many-body excitations, transport and the exchange between different energy forms—has been a fruitful human endeavor since the birth of the transistor, where they found their first large-scale use. Integrated electronics, through its social and commercial informational ubiquity; optoelectronics, through lasers and photovoltaics; and thermoelectronics and magnetoelectronics, with their use in energy transformation and signal detection, are but a few of these gainful uses. Nanoscale, within this milieu, opens up a variety of perturbative and significantly more substantial and sensitive effects. Some are very useful, and some can be quite a bother....

2018 ◽  
Author(s):  
Pavel Pokhilko ◽  
Evgeny Epifanovsky ◽  
Anna I. Krylov

Using single-precision floating-point representation reduces the size of data and the computation time by a factor of two relative to the double precision conventionally used in electronic-structure programs. For large-scale calculations, such as those encountered in many-body theories, the reduced memory footprint alleviates memory and input/output bottlenecks. The reduced size of data can lead to additional gains due to improved parallel performance on CPUs and various accelerators. However, using single precision can potentially reduce the accuracy of computed observables. Here we report an implementation of coupled-cluster and equation-of-motion coupled-cluster methods with single and double excitations in single precision. We consider both the standard implementation and one using Cholesky decomposition or resolution-of-the-identity representations of the electron-repulsion integrals. Numerical tests illustrate that when single precision is used in correlated calculations, the loss of accuracy is insignificant, and a pure single-precision implementation can be used for computing energies, analytic gradients, excited states, and molecular properties. In addition to pure single-precision calculations, our implementation allows one to follow a single-precision calculation with clean-up iterations, fully recovering double-precision results while retaining significant savings.
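The clean-up strategy described above can be pictured with a generic mixed-precision sketch: run the bulk of the numerical work in float32 and then apply a few residual-correction iterations in float64. This is only an analogy in Python/NumPy for the single-precision-plus-clean-up idea, not the coupled-cluster implementation itself; the test matrix and function name are hypothetical.

```python
import numpy as np

def solve_mixed_precision(A, b, n_cleanup=2):
    """Solve A x = b mostly in float32, then 'clean up' in float64.

    Mimics the idea of doing the expensive iterations in single precision
    and recovering double-precision accuracy with a few refinement steps.
    """
    A32, b32 = A.astype(np.float32), b.astype(np.float32)
    x = np.linalg.solve(A32, b32).astype(np.float64)      # bulk work in single precision
    for _ in range(n_cleanup):                            # clean-up iterations
        r = b - A @ x                                      # residual evaluated in double precision
        x += np.linalg.solve(A32, r.astype(np.float32)).astype(np.float64)
    return x

rng = np.random.default_rng(0)
A = rng.standard_normal((200, 200)) + 200 * np.eye(200)   # well-conditioned test matrix
b = rng.standard_normal(200)
x = solve_mixed_precision(A, b)
print(np.linalg.norm(A @ x - b))                           # close to double-precision accuracy
```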


2019 ◽  
Author(s):  
Ryther Anderson ◽  
Achay Biong ◽  
Diego Gómez-Gualdrón

Tailoring the structure and chemistry of metal-organic frameworks (MOFs) enables the manipulation of their adsorption properties to suit specific energy and environmental applications. As there are millions of possible MOFs (with tens of thousands already synthesized), molecular simulation, such as grand canonical Monte Carlo (GCMC), has frequently been used to rapidly evaluate the adsorption performance of a large set of MOFs. This allows subsequent experiments to focus only on a small subset of the most promising MOFs. In many instances, however, even molecular simulation becomes prohibitively time consuming, underscoring the need for alternative screening methods, such as machine learning, to precede molecular simulation efforts. In this study, as a proof of concept, we trained a neural network as the first example of a machine learning model capable of predicting full adsorption isotherms of different molecules not included in the training of the model. To achieve this, we trained our neural network only on alchemical species, represented only by their geometry and force field parameters, and used this neural network to predict the loadings of real adsorbates. We focused on predicting room-temperature adsorption of small (one- and two-atom) molecules relevant to chemical separations, namely argon, krypton, xenon, methane, ethane, and nitrogen. However, we also observed surprisingly promising predictions for more complex molecules, whose properties are outside the range spanned by the alchemical adsorbates. Prediction accuracies suitable for large-scale screening were achieved using simple MOF descriptors (e.g., geometric properties and chemical moieties) and adsorbate descriptors (e.g., force field parameters and geometry). Our results illustrate a new philosophy of training that opens the path towards the development of machine learning models that can predict the adsorption loading of any new adsorbate at any new operating conditions in any new MOF.
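As a rough sketch of what a descriptor-based loading model can look like, the snippet below trains a small feed-forward network on concatenated MOF and adsorbate descriptors plus a pressure value, then sweeps the pressure feature to trace out an isotherm. The descriptor layout, network size and data are placeholder assumptions, not the descriptors or GCMC training data used in the study.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Hypothetical feature layout: columns 0-4 = MOF descriptors (void fraction,
# surface area, pore diameter, ...), columns 5-6 = adsorbate descriptors
# (LJ epsilon, LJ sigma), column 7 = pressure of the isotherm point.
rng = np.random.default_rng(1)
X = rng.uniform(size=(5000, 8))                       # placeholder descriptors
y = np.log1p(10.0 * X[:, 0] * X[:, 7])                # placeholder "loading" target

model = make_pipeline(StandardScaler(),
                      MLPRegressor(hidden_layer_sizes=(64, 64),
                                   max_iter=500, random_state=0))
model.fit(X, y)

# A "full isotherm" prediction = sweeping the pressure column for one
# fixed MOF/adsorbate descriptor combination.
pressures = np.linspace(0.01, 1.0, 20)
query = np.tile(X[0], (20, 1))
query[:, 7] = pressures
isotherm = model.predict(query)
```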


Organics ◽  
2021 ◽  
Vol 2 (3) ◽  
pp. 142-160
Author(s):  
Keith Smith ◽  
Gamal A. El-Hiti

para-Selective processes for the chlorination of phenols using sulphuryl chloride in the presence of various sulphur-containing catalysts have been successfully developed. Several chlorinated phenols, especially those derived by para-chlorination of phenol, ortho-cresol, meta-cresol, and meta-xylenol, are of significant commercial importance, but chlorination reactions of such phenols are not always as regioselective as would be desirable. We, therefore, undertook the challenge of developing suitable catalysts that might promote greater regioselectivity under conditions that might still be applicable for the commercial manufacture of products on a large scale. In this review, we chart our progress in this endeavour from early studies involving inorganic solids as potential catalysts, through the use of simple dialkyl sulphides, which were effective but unsuitable for commercial application, and through a variety of other types of sulphur compounds, to the eventual identification of particular poly(alkylene sulphide)s as very useful catalysts. When used in conjunction with a Lewis acid such as aluminium or ferric chloride as an activator, and with sulphuryl chloride as the reagent, quantitative yields of chlorophenols can be obtained with very high regioselectivity in the presence of tiny amounts of the polymeric sulphides, usually under solvent-free conditions (unless the phenol starting material is solid at temperatures even above about 50 °C). Notably, poly(alkylene sulphide)s containing longer spacer groups are particularly para-selective in the chlorination of m-cresol and m-xylenol, while ones with shorter spacers are particularly para-selective in the chlorination of phenol, 2-chlorophenol, and o-cresol. Such chlorination processes result in some of the highest para/ortho ratios reported for the chlorination of phenols.


Energies ◽  
2021 ◽  
Vol 14 (4) ◽  
pp. 931
Author(s):  
Karolina Mucha-Kuś ◽  
Maciej Sołtysik ◽  
Krzysztof Zamasz ◽  
Katarzyna Szczepańska-Woszczyna

The decentralization of the large-scale energy sector, its replacement with pro-ecological, dispersed production sources, and building a citizen dimension of the energy sector are the directional objectives of the energy transformation in the European Union. Building energy self-sufficiency at a local level is possible through so-called Energy Communities, which include energy clusters and energy cooperatives. Several dozen pilot projects for energy clusters have been implemented in Poland, while energy cooperatives, despite being legally sanctioned and potentially offering a simpler formula of operation, have not functioned in practice. This article presents the coopetitive nature of Energy Communities. The authors analysed the principles and benefits of creating Energy Communities from a regulatory and practical side. An important element of the analysis is to indicate the managerial, coopetitive nature of the strategies implemented within the Energy Communities. Their members, while operating in a competitive environment, simultaneously cooperate to achieve common benefits. On the basis of actual data from recipients and producers, the results of simulations of benefits in the economic dimension are presented, supporting the thesis that creating coopetitive structures of Energy Communities is justified.
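The economic intuition behind coopetition can be sketched with a toy settlement calculation: when local production first offsets local demand inside the community, less energy is bought at the retail price and less surplus is sold at the lower feed-in price than if every member settled with the grid alone. All member names, volumes and tariffs below are hypothetical, not the actual data analysed in the article.

```python
# Toy community balance; figures and prices are made up for illustration.
members = {
    "household_A": {"demand_kwh": 300, "production_kwh": 0},
    "household_B": {"demand_kwh": 250, "production_kwh": 400},  # rooftop PV
    "farm_C":      {"demand_kwh": 900, "production_kwh": 500},
}
buy_price, sell_price = 0.80, 0.30   # hypothetical tariffs per kWh

def net_cost(demand_kwh, production_kwh):
    """Cost of settling a net position with the grid (negative = revenue)."""
    net = demand_kwh - production_kwh
    return net * buy_price if net > 0 else net * sell_price

standalone = sum(net_cost(m["demand_kwh"], m["production_kwh"])
                 for m in members.values())

community = net_cost(sum(m["demand_kwh"] for m in members.values()),
                     sum(m["production_kwh"] for m in members.values()))

print(f"standalone: {standalone:.2f}, as a community: {community:.2f}")
```

The difference between the two totals is the joint benefit that a coopetitive settlement scheme can redistribute among the members.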


2012 ◽  
Vol 209-211 ◽  
pp. 252-255
Author(s):  
Li Guo ◽  
Hai Ying Zheng ◽  
Yong Hong Wang ◽  
Bin Zhang

Data matching technology is a key technology for spatial data integration and fusion. This paper presents a solution for matching complex polygon areas: it defines the area overlap rate as a geometric measure and builds the data matching approach on that rate. The paper then discusses and implements the matching relations of area elements, covering one-to-one, many-to-one and many-to-many cases. Finally, taking region targets as the study object and large-scale data as the example, we conclude that the algorithm is efficient.
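A minimal sketch of the overlap-rate idea, assuming the shapely library for polygon geometry: compute an overlap rate from intersection areas and keep every pair above a threshold, from which one-to-one, many-to-one and many-to-many relations fall out. The exact overlap-rate definition and threshold are assumptions, and the brute-force double loop would need a spatial index for genuinely large-scale datasets.

```python
from shapely.geometry import Polygon

def area_overlap_rate(a: Polygon, b: Polygon) -> float:
    """Intersection area divided by the smaller polygon's area (one common choice)."""
    inter = a.intersection(b).area
    return inter / min(a.area, b.area) if inter > 0 else 0.0

def match_polygons(source, target, threshold=0.5):
    """Collect candidate matches; repeated indices encode m:1 and m:n relations."""
    matches = []
    for i, a in enumerate(source):
        for j, b in enumerate(target):
            rate = area_overlap_rate(a, b)
            if rate >= threshold:
                matches.append((i, j, rate))
    return matches

src = [Polygon([(0, 0), (2, 0), (2, 2), (0, 2)])]
tgt = [Polygon([(1, 1), (3, 1), (3, 3), (1, 3)]),
       Polygon([(0, 0), (1.8, 0), (1.8, 1.8), (0, 1.8)])]
print(match_polygons(src, tgt))   # [(0, 1, 1.0)]
```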


2021 ◽  
Author(s):  
Béla Kovács ◽  
Márton Pál ◽  
Fanni Vörös

The use of aerial photography in topography started in the first decades of the 20th century. Remote-sensed data have become indispensable for cartographers and GIS staff when doing large-scale mapping: especially topographic, orienteering and thematic maps. The use of UAVs (unmanned aerial vehicles) for this purpose has also become widespread in recent years. Various drones and sensors (RGB, multispectral and hyperspectral) with many specifications are used to capture and process the physical properties of an examined area. In parallel with the development of the hardware, new software solutions are emerging to visualize and analyse photogrammetric material: a large set of algorithms with different approaches are available for image processing.

Our study focuses on the large-scale topographic mapping of vegetation and land cover. Most traditional analogue and digital maps use these layers either for background or highlighted thematic purposes. We propose to use the theory of OBIA (Object-Based Image Analysis) to differentiate cover types. This method involves pixels being grouped into larger polygon units based on either spectral or other variables (e.g. elevation, aspect, curvature in the case of DEMs). The neighbours of initial seed points are examined to decide whether they should be added to the region according to the similarity of their attributes. Using OBIA, different land cover types (trees, grass, soils, bare rock surfaces) can be distinguished with either supervised or unsupervised classification, depending on the purposes of the analyst. Our base data were high-resolution RGB and multispectral images (with 5 bands).

Following this methodology, not only elevation data (e.g. shaded relief or vector contour lines) can be derived from UAV imagery, but vector land cover data also become available for cartographers and GIS analysts. As the number of distinct land cover groups is free to choose, even quite complex thematic layers can be produced. These layers can serve as subjects of further analyses or for cartographic visualization.

BK is supported by the "Application Domain Specific Highly Reliable IT Solutions" project, which has been implemented with the support provided from the National Research, Development and Innovation Fund of Hungary, financed under the Thematic Excellence Programme TKP2020-NKA-06 (National Challenges Subprogramme) funding scheme.

MP and FV are supported by EFOP-3.6.3-VEKOP-16-2017-00001: Talent Management in Autonomous Vehicle Control Technologies. The project is financed by the Hungarian Government and co-financed by the European Social Fund.
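The seeded region-growing step at the heart of such an OBIA workflow can be sketched for a single-band raster as below; real OBIA tools use multi-band spectral, textural and shape criteria, but the control flow (grow from seed points while neighbouring pixels stay similar to the region) is the same. The tolerance, seeds and toy image are illustrative assumptions.

```python
import numpy as np
from collections import deque

def region_grow(image, seeds, tol=0.1):
    """Grow one labelled region per seed on a single-band raster.

    A pixel joins a region if its value is within `tol` of the region's
    running mean; unvisited pixels keep label 0.
    """
    labels = np.zeros(image.shape, dtype=int)
    for label, (r0, c0) in enumerate(seeds, start=1):
        labels[r0, c0] = label
        region_sum, region_n = float(image[r0, c0]), 1
        queue = deque([(r0, c0)])
        while queue:
            r, c = queue.popleft()
            for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                rr, cc = r + dr, c + dc
                if (0 <= rr < image.shape[0] and 0 <= cc < image.shape[1]
                        and labels[rr, cc] == 0
                        and abs(image[rr, cc] - region_sum / region_n) <= tol):
                    labels[rr, cc] = label
                    region_sum += image[rr, cc]
                    region_n += 1
                    queue.append((rr, cc))
    return labels

# Two flat patches of different brightness, e.g. "grass" vs "bare rock".
img = np.zeros((20, 20))
img[:, 10:] = 1.0
print(np.unique(region_grow(img, seeds=[(5, 2), (5, 15)])))   # [1 2]
```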


Author(s):  
Martin Schreiber ◽  
Pedro S Peixoto ◽  
Terry Haut ◽  
Beth Wingate

This paper presents, discusses and analyses a massively parallel-in-time solver for linear oscillatory partial differential equations, which is a key numerical component for evolving weather, ocean, climate and seismic models. The time parallelization in this solver allows us to significantly exceed the computing resources used by parallelization-in-space methods and results in a correspondingly significantly reduced wall-clock time. One of the major difficulties of achieving Exascale performance for weather prediction is that the strong scaling limit – the parallel performance for a fixed problem size with an increasing number of processors – saturates. A main avenue to circumvent this problem is to introduce new numerical techniques that take advantage of time parallelism. In this paper, we use a time-parallel approximation that retains the frequency information of oscillatory problems. This approximation is based on (a) reformulating the original problem into a large set of independent terms and (b) solving each of these terms independently of each other, which can now be accomplished on a large number of high-performance computing resources. Our experiments were conducted on up to 3586 cores, for problem sizes whose parallelization-in-space scalability is already limited on a single node. With the parallelization-in-time approach we gain significant reductions in the time-to-solution of 118.3× for spectral methods and 1503.0× for finite-difference methods. A developed and calibrated performance model gives the scalability limitations a priori for this new approach and allows us to extrapolate the performance of the method towards large-scale systems. This work has the potential to contribute as a basic building block of parallelization-in-time approaches, with possible major implications in applied areas modelling oscillatory-dominated problems.
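The "large set of independent terms" structure can be illustrated with a toy calculation (this is not the rational-approximation scheme used in the paper, just the same embarrassingly parallel pattern): for a linear oscillatory system du/dt = Lu with skew-symmetric L, each eigenmode advances independently over an arbitrarily long time step, so every term in the sum below could be handed to a different processor and combined only at the end.

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(2)
A = rng.standard_normal((64, 64))
L = A - A.T                          # skew-symmetric => purely oscillatory dynamics
u0 = rng.standard_normal(64)
t = 50.0                             # one large time step

w, V = np.linalg.eig(L)              # eigenmodes of the oscillatory operator
coeffs = np.linalg.solve(V, u0.astype(complex))

# Each term is independent of all others: in a parallel-in-time setting these
# would be evaluated concurrently and only the final sum requires communication.
terms = [coeffs[k] * np.exp(w[k] * t) * V[:, k] for k in range(len(w))]
u_t = np.sum(terms, axis=0).real

print(np.linalg.norm(u_t - expm(t * L) @ u0))   # agreement with the direct solution
```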


1997 ◽  
Vol 50 (3) ◽  
pp. 528-559 ◽  
Author(s):  
Catriona M. Morrison ◽  
Tameron D. Chappell ◽  
Andrew W. Ellis

Studies of lexical processing have relied heavily on adult ratings of word learning age or age of acquisition, which have been shown to be strongly predictive of processing speed. This study reports a set of objective norms derived in a large-scale study of British children's naming of 297 pictured objects (including 232 from the Snodgrass & Vanderwart, 1980, set). In addition, data were obtained on measures of rated age of acquisition, rated frequency, imageability, object familiarity, picture-name agreement, and name agreement. We discuss the relationship between the objective measure and adult ratings of word learning age. Objective measures should be used when available, but where not, our data suggest that adult ratings provide a reliable and valid measure of real word learning age.


2019 ◽  
Author(s):  
K. Vyse ◽  
L. Faivre ◽  
M. Romich ◽  
M. Pagter ◽  
D. Schubert ◽  
...  

Chromatin regulation ensures stable repression of stress-inducible genes under non-stress conditions, as well as transcriptional activation of those genes, and memory of such activation, when plants are exposed to stress. However, there is only limited knowledge of how chromatin genes are regulated at the transcriptional and post-transcriptional level upon stress exposure and relief from stress. We have therefore set up an RT-qPCR-based platform for high-throughput transcriptional profiling of a large set of chromatin genes. We find that the expression of a large fraction of these genes is regulated by cold. In addition, we reveal an induction of several DNA and histone demethylase genes and certain histone variants after plants have been shifted back to ambient temperature (deacclimation), suggesting a role in the memory of cold acclimation. We also re-analyse large-scale transcriptomic datasets for transcriptional regulation and alternative splicing (AS) of chromatin genes, uncovering an unexpected level of regulation of these genes, particularly at the splicing level. This includes several vernalization-regulating genes whose AS results in cold-regulated protein diversity. Overall, we provide a profiling platform for the analysis of chromatin regulatory genes and integrative analyses of their regulation, suggesting a dynamic regulation of key chromatin genes in response to low-temperature stress.


2018 ◽  
Author(s):  
Yang Xu ◽  
Barbara Claire Malt ◽  
Mahesh Srinivasan

One way that languages are able to communicate a potentially infinite set of ideas through a finite lexicon is by compressing emerging meanings into words, such that over time, individual words come to express multiple, related senses of meaning. We propose that overarching communicative and cognitive pressures have created systematic directionality in how new metaphorical senses have developed from existing word senses over the history of English. Given a large set of pairs of semantic domains, we used computational models to test which domains have been more commonly the starting points (source domains) and which the ending points (target domains) of metaphorical mappings over the past millennium. We found that a compact set of variables, including externality, embodiment, and valence, explain directionality in the majority of about 5000 metaphorical mappings recorded over the past 1100 years. These results provide the first large-scale historical evidence that metaphorical mapping is systematic, and driven by measurable communicative and cognitive principles.
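A sketch of the kind of model that could test such directionality: given a pair of semantic domains, predict which one acts as the metaphorical source from differences in a few interpretable variables. The variable names come from the abstract; the data below are synthetic placeholders, not the roughly 5000 historical mappings analysed in the study.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(3)
n = 1000
# Differences (domain A minus domain B) in externality, embodiment and valence.
X = rng.normal(size=(n, 3))
# Synthetic ground truth: more external / more embodied domains tend to be sources.
logits = 2.0 * X[:, 0] + 1.5 * X[:, 1] + 0.2 * X[:, 2]
y = (rng.uniform(size=n) < 1.0 / (1.0 + np.exp(-logits))).astype(int)  # 1 = A is the source

clf = LogisticRegression().fit(X, y)
print(dict(zip(["externality", "embodiment", "valence"], clf.coef_[0].round(2))))
```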

