Turning a Liability into an Asset

2019 ◽  
Vol 2 (1) ◽  
pp. 5-13
Author(s):  
James Guild

President Jokowi has promised to add 35 GW of power to the national grid, while the Ministry of Energy and Mineral Resources wants to source 23% of its power from renewable sources by 2025. It will be difficult to reconcile these two goals, as the majority of Indonesia’s 35 GW is expected to come from high-capacity coal and gas-fired plants on Java and Sumatra. This runs the risk of both undershooting the renewables goal and neglecting the more remote provinces in eastern Indonesia that rely mainly on imported diesel fuel. With a shrewd policy approach, this could present an opportunity to begin developing small-scale renewable power sources – such as solar, wind, and biomass gasification – in more remote parts of Indonesia where natural resources are plentiful and large-scale fossil fuel plants are impractical. This would allow PLN to both boost the share of renewables in the energy mix and acquire experience running flexible micro-grids capable of managing diverse and decentralized energy sources. This would put Indonesia ahead of the curve, as efficient grids that can draw power from a wide range of sources will likely play a big role in the future of energy policy. If PLN continues to focus narrowly on high-capacity gas and coal plants, it risks getting locked into an inflexible, high-carbon structure ill-suited to the needs of the 21st century. The limits of such a model are already showing in the United States.
Keywords: Infrastructure, energy policy, renewables, smart grid, PLN

2021 ◽  
Vol 13 (1) ◽  
Author(s):  
Md Al Mahadi Hasan ◽  
Yuanhao Wang ◽  
Chris R. Bowen ◽  
Ya Yang

Abstract The development of a nation is deeply related to its energy consumption. 2D nanomaterials have become a spotlight for energy harvesting applications, from small-scale, low-power electronics to large-scale, industry-level uses such as self-powered sensor devices, environmental monitoring, and power generation. Scientists around the world are working to exploit the engrossing properties of these materials to overcome the challenges in material selection and fabrication technology for compact energy-scavenging devices that could replace batteries and traditional power sources. In this review, the variety of techniques for scavenging energy from sustainable sources such as solar, air, waste heat, and surrounding mechanical forces, all of which exploit the fascinating properties of 2D nanomaterials, is discussed. In addition, practical applications of these power-generating devices and their performance as alternatives to conventional power supplies are discussed, along with their future relevance for solving energy problems across a variety of fields.


1973 ◽  
Vol 7 (1) ◽  
pp. 1-15
Author(s):  
H. G. Nicholas

Elections satisfy both the practical and the theoretical requirements of classical democratic theory if they answer one question only: Who shall rule? Judged by this test the American elections of 7 November 1972 returned as clear and unequivocal an answer as the United States Constitution permits – crystal-clear as to individuals, equivocal as to parties and political forces. But the student of politics and society cannot resist treating elections as data-gathering devices on a wide range of other questions, on the state of the public mind, on the relative potency of pressure groups, on the internal health of the political parties, and, of course, on the shape of things to come. In this ancillary role American elections, despite the generous wealth of statistical material which they throw up – so much more detailed and categorized (though often less precise) than our own – suffer in most years from one severe limitation, a limitation which in 1972 was particularly conspicuous; they do not engage the interest of more than a moderate percentage of the American citizenry. In 1972 that percentage was as low as 55 per cent, i.e. out of an estimated eligible population of 139,642,000 only 77,000,000 went to the polls. Since this circumscribes the conclusions which can be drawn from the results themselves, as well as constituting a phenomenon of considerable intrinsic interest, it seems worthwhile to begin any examination of the 1972 elections by an analysis not of the votes counted but of those which were never cast.
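As a quick arithmetic check of the turnout figure quoted above, using the two totals given in the abstract:

\[
\frac{77{,}000{,}000}{139{,}642{,}000} \approx 0.551 \approx 55\ \text{per cent}
\]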


2000 ◽  
Vol 663 ◽  
Author(s):  
J. Samper ◽  
R. Juncosa ◽  
V. Navarro ◽  
J. Delgado ◽  
L. Montenegro ◽  
...  

ABSTRACT FEBEX (Full-scale Engineered Barrier EXperiment) is a demonstration and research project dealing with the bentonite engineered barrier designed for sealing and containment of waste in a high level radioactive waste repository (HLWR). It includes two main experiments: an in situ full-scale test performed at Grimsel (GTS) and a mock-up test operating since February 1997 at CIEMAT facilities in Madrid (Spain) [1,2,3]. One of the objectives of FEBEX is the development and testing of conceptual and numerical models for the thermal, hydrodynamic, and geochemical (THG) processes expected to take place in engineered clay barriers. A significant improvement in coupled THG modeling of the clay barrier has been achieved both in terms of a better understanding of THG processes and more sophisticated THG computer codes. The ability of these models to reproduce the observed THG patterns in a wide range of THG conditions enhances the confidence in their prediction capabilities. Numerical THG models of heating and hydration experiments performed on small-scale lab cells provide excellent results for temperatures, water inflow and final water content in the cells [3]. Calculated concentrations at the end of the experiments reproduce most of the patterns of measured data. In general, the fit of concentrations of dissolved species is better than that of exchanged cations. These models were later used to simulate the evolution of the large-scale experiments (in situ and mock-up). Some thermo-hydrodynamic hypotheses and bentonite parameters were slightly revised during TH calibration of the mock-up test. The results of the reference model reproduce simultaneously the observed water inflows and bentonite temperatures and relative humidities. Although the model is highly sensitive to one-at-a-time variations in model parameters, the possibility of parameter combinations leading to similar fits cannot be precluded. The TH model of the “in situ” test is based on the same bentonite TH parameters and assumptions as for the “mock-up” test. Granite parameters were slightly modified during the calibration process in order to reproduce the observed thermal and hydrodynamic evolution. The reference model captures properly relative humidities and temperatures in the bentonite [3]. It also reproduces the observed spatial distribution of water pressures and temperatures in the granite. Once the TH aspects of the model had been calibrated, predictions of the THG evolution of both tests were performed. Data from the dismantling of the in situ test, which is planned for the summer of 2001, will provide a unique opportunity to test and validate current THG models of the EBS.
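The abstract notes that the calibrated model is highly sensitive to one-at-a-time variations in its parameters. Purely as an illustration of that kind of check (the FEBEX THG codes are not shown here; run_thg_model and the parameter names below are hypothetical placeholders), a minimal one-at-a-time sensitivity scan might look like this:

```python
def run_thg_model(params):
    """Hypothetical stand-in for one coupled THG simulation; assumed to
    return a scalar model-vs-observation misfit (e.g. an RMSE combining
    temperature and relative humidity errors)."""
    raise NotImplementedError("placeholder for the actual THG code")

# Calibrated reference parameter set; names and values are illustrative only.
calibrated = {
    "bentonite_thermal_conductivity": 1.2,
    "bentonite_intrinsic_permeability": 5e-21,
    "granite_permeability": 8e-18,
}

def one_at_a_time_scan(base_params, rel_steps=(-0.1, 0.1)):
    """Perturb each parameter individually and record how the misfit changes
    relative to the calibrated reference run."""
    base_misfit = run_thg_model(base_params)
    sensitivities = {}
    for name, value in base_params.items():
        for step in rel_steps:
            perturbed = dict(base_params)
            perturbed[name] = value * (1.0 + step)
            sensitivities[(name, step)] = run_thg_model(perturbed) - base_misfit
    return sensitivities
```

A scan like this only probes one parameter at a time, which is why, as the abstract cautions, it cannot rule out compensating parameter combinations that fit the data equally well.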


2018 ◽  
Vol 856 ◽  
pp. 135-168 ◽  
Author(s):  
S. T. Salesky ◽  
W. Anderson

A number of recent studies have demonstrated the existence of so-called large- and very-large-scale motions (LSM, VLSM) that occur in the logarithmic region of inertia-dominated wall-bounded turbulent flows. These regions exhibit significant streamwise coherence, and have been shown to modulate the amplitude and frequency of small-scale inner-layer fluctuations in smooth-wall turbulent boundary layers. In contrast, the extent to which analogous modulation occurs in inertia-dominated flows subjected to convective thermal stratification (low Richardson number) and Coriolis forcing (low Rossby number), has not been considered. And yet, these parameter values encompass a wide range of important environmental flows. In this article, we present evidence of amplitude modulation (AM) phenomena in the unstably stratified (i.e. convective) atmospheric boundary layer, and link changes in AM to changes in the topology of coherent structures with increasing instability. We perform a suite of large eddy simulations spanning weakly ($-z_{i}/L=3.1$) to highly convective ($-z_{i}/L=1082$) conditions (where $-z_{i}/L$ is the bulk stability parameter formed from the boundary-layer depth $z_{i}$ and the Obukhov length $L$) to investigate how AM is affected by buoyancy. Results demonstrate that as unstable stratification increases, the inclination angle of surface layer structures (as determined from the two-point correlation of streamwise velocity) increases from $\gamma \approx 15^{\circ}$ for weakly convective conditions to nearly vertical for highly convective conditions. As $-z_{i}/L$ increases, LSMs in the streamwise velocity field transition from long, linear updrafts (or horizontal convective rolls) to open cellular patterns, analogous to turbulent Rayleigh–Bénard convection. These changes in the instantaneous velocity field are accompanied by a shift in the outer peak in the streamwise and vertical velocity spectra to smaller dimensionless wavelengths until the energy is concentrated at a single peak. The decoupling procedure proposed by Mathis et al. (J. Fluid Mech., vol. 628, 2009a, pp. 311–337) is used to investigate the extent to which amplitude modulation of small-scale turbulence occurs due to large-scale streamwise and vertical velocity fluctuations. As the spatial attributes of flow structures change from streamwise to vertically dominated, modulation by the large-scale streamwise velocity decreases monotonically. However, the modulating influence of the large-scale vertical velocity remains significant across the stability range considered. We report, finally, that amplitude modulation correlations are insensitive to the computational mesh resolution for flows forced by shear, buoyancy and Coriolis accelerations.
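For readers unfamiliar with the decoupling procedure of Mathis et al. cited above, the sketch below outlines its single-signal form under simplifying assumptions (a uniformly sampled, zero-mean streamwise velocity signal and a plain spectral cutoff); it is a generic illustration, not the exact filtering used in this study.

```python
import numpy as np
from scipy.signal import hilbert

def amplitude_modulation_coefficient(u, cutoff_index):
    """Single-signal sketch of a Mathis-et-al.-style decoupling.
    u: uniformly sampled, zero-mean streamwise velocity fluctuations.
    cutoff_index: spectral index separating large and small scales."""
    n = u.size
    U = np.fft.rfft(u)

    # Large-scale component: keep only wavenumbers below the cutoff.
    U_large = U.copy()
    U_large[cutoff_index:] = 0.0
    u_large = np.fft.irfft(U_large, n)

    # Small-scale residual and its envelope (magnitude of the analytic signal).
    u_small = u - u_large
    envelope = np.abs(hilbert(u_small))

    # Low-pass filter the envelope at the same cutoff before correlating.
    E = np.fft.rfft(envelope - envelope.mean())
    E[cutoff_index:] = 0.0
    envelope_large = np.fft.irfft(E, n)

    # Correlation between the filtered envelope and the large-scale signal
    # quantifies the degree of amplitude modulation.
    return np.corrcoef(u_large, envelope_large)[0, 1]
```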


2020 ◽  
Author(s):  
Aeriel D Belk ◽  
Toni Duarte ◽  
Casey Quinn ◽  
David A. Coil ◽  
Keith E. Belk ◽  
...  

Abstract Background. The United States’ large-scale poultry meat industry is energy and water intensive, and opportunities may exist to improve sustainability during the broiler chilling process. After harvest, the internal temperature of the chicken is rapidly cooled to inhibit bacterial growth that would otherwise compromise the safety of the product. This step is accomplished most commonly by water immersion chilling in the United States, while air chilling methods dominate other global markets. A comprehensive understanding of the differences between these chilling methods is lacking. Therefore, we assessed the meat quality, shelf-life, microbial ecology, and technoeconomic impacts of chilling methods on chicken broilers in a university meat laboratory setting. Results. We discovered that air-chilling (AC) methods resulted in superior chicken odor and shelf-life, especially prior to 14 days of dark storage. Moreover, we demonstrated that AC resulted in a more diverse microbiome that we hypothesize may delay the dominance of the spoilage organism Pseudomonas. Finally, a technoeconomic analysis highlighted potential economic advantages to AC when compared to water-chilling (WC) in facility locations where water costs are a more significant factor than energy costs. Conclusions. In this pilot study, AC methods resulted in a superior product compared to WC methods and may have economic advantages in regions of the U.S. where water is expensive. As a next step, a similar experiment should be done in an industrial setting to confirm these results generated in a small-scale university lab facility.
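The technoeconomic conclusion turns on the relative weight of water and energy costs at a given facility. The sketch below illustrates only that trade-off; every function name and number in it is a hypothetical placeholder, not a value from the study.

```python
def chilling_cost_per_bird(water_m3, energy_kwh, water_price, energy_price):
    """Operating cost per bird for a chilling method, given its water and
    energy use per bird and local utility prices (all inputs hypothetical)."""
    return water_m3 * water_price + energy_kwh * energy_price

# Illustrative comparison: water chilling (WC) typically uses more water per
# bird, air chilling (AC) more energy; the crossover depends on local prices.
wc = chilling_cost_per_bird(water_m3=0.02, energy_kwh=0.10,
                            water_price=1.50, energy_price=0.08)
ac = chilling_cost_per_bird(water_m3=0.002, energy_kwh=0.25,
                            water_price=1.50, energy_price=0.08)
print(f"WC: ${wc:.4f}/bird, AC: ${ac:.4f}/bird")
```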


2015 ◽  
Vol 54 (4) ◽  
pp. 83
Author(s):  
Paul MacLennan

In the winter of 2015, as this review is being written, the price of gasoline is plummeting in the United States; what this will mean for individuals, communities, and the country, both in the immediate future and in years to come, is unknown. The implications range across politics, economics, and international relations, as well as what the individual pays for everyday groceries. It is therefore important that libraries provide their communities with resources that offer information and discussion on how energy and its monetary value interact with society.


2020 ◽  
Author(s):  
Yuan Yuan ◽  
Lei Lin

Satellite image time series (SITS) classification is a major research topic in remote sensing and is relevant for a wide range of applications. Deep learning approaches have been commonly employed for SITS classification and have provided state-of-the-art performance. However, deep learning methods suffer from overfitting when labeled data is scarce. To address this problem, we propose a novel self-supervised pre-training scheme to initialize a Transformer-based network by utilizing large-scale unlabeled data. In detail, the model is asked to predict randomly contaminated observations given an entire time series of a pixel. The main idea of our proposal is to leverage the inherent temporal structure of satellite time series to learn general-purpose spectral-temporal representations related to land cover semantics. Once pre-training is completed, the pre-trained network can be further adapted to various SITS classification tasks by fine-tuning all the model parameters on small-scale task-related labeled data. In this way, the general knowledge and representations about SITS can be transferred to a label-scarce task, thereby improving the generalization performance of the model as well as reducing the risk of overfitting. Comprehensive experiments have been carried out on three benchmark datasets over large study areas. Experimental results demonstrate the effectiveness of the proposed method, leading to classification accuracy improvements of 1.91% to 6.69%.
This work has been submitted to the IEEE for possible publication. Copyright may be transferred without notice, after which this version may no longer be accessible.
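As an illustration of the masked-prediction idea described above (predicting randomly contaminated observations from a pixel's full time series), the following sketch shows one possible Transformer-based setup in PyTorch; the layer sizes, masking scheme, and class names are assumptions, not the authors' architecture.

```python
import torch
import torch.nn as nn

class MaskedSITSPretrainer(nn.Module):
    """Illustrative masked-prediction pre-training model for one pixel's
    satellite image time series (not the authors' exact architecture)."""
    def __init__(self, n_bands=10, d_model=64, n_heads=4, n_layers=3, seq_len=30):
        super().__init__()
        self.embed = nn.Linear(n_bands, d_model)
        self.pos = nn.Parameter(torch.zeros(1, seq_len, d_model))
        layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, n_layers)
        self.head = nn.Linear(d_model, n_bands)  # reconstruct spectral bands

    def forward(self, x, mask):
        # x: (batch, seq_len, n_bands); mask: (batch, seq_len) boolean,
        # True where an observation has been contaminated/hidden.
        x = x.masked_fill(mask.unsqueeze(-1), 0.0)   # zero out masked steps
        h = self.encoder(self.embed(x) + self.pos)
        return self.head(h)                          # predictions for all steps

def masked_mse(pred, target, mask):
    """Pre-training loss computed only at the masked positions."""
    diff = (pred - target)[mask]
    return (diff ** 2).mean()
```

After pre-training, the encoder weights would be kept and a classification head fine-tuned on the small labeled set, in the spirit of the scheme described above.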


Oceanography ◽  
2021 ◽  
Vol 34 (1) ◽  
pp. 58-75
Author(s):  
Michel Boufadel ◽  
Annalisa Bracco ◽  
Eric Chassignet ◽  
Shuyi Chen ◽  
...  

Physical transport processes such as the circulation and mixing of waters largely determine the spatial distribution of materials in the ocean. They also establish the physical environment within which biogeochemical and other processes transform materials, including naturally occurring nutrients and human-made contaminants that may sustain or harm the region’s living resources. Thus, understanding and modeling the transport and distribution of materials provides a crucial substrate for determining the effects of biological, geological, and chemical processes. The wide range of scales in which these physical processes operate includes microscale droplets and bubbles; small-scale turbulence in buoyant plumes and the near-surface “mixed” layer; submesoscale fronts, convergent and divergent flows, and small eddies; larger mesoscale quasi-geostrophic eddies; and the overall large-scale circulation of the Gulf of Mexico and its interaction with the Atlantic Ocean and the Caribbean Sea; along with air-sea interaction on longer timescales. The circulation and mixing processes that operate near the Gulf of Mexico coasts, where most human activities occur, are strongly affected by wind- and river-induced currents and are further modified by the area’s complex topography. Gulf of Mexico physical processes are also characterized by strong linkages between coastal/shelf and deeper offshore waters that determine connectivity to the basin’s interior. This physical connectivity influences the transport of materials among different coastal areas within the Gulf of Mexico and can extend to adjacent basins. Major advances enabled by the Gulf of Mexico Research Initiative in the observation, understanding, and modeling of all of these aspects of the Gulf’s physical environment are summarized in this article, and key priorities for future work are also identified.


2020 ◽  
Author(s):  
Congmei Jiang ◽  
Yongfang Mao ◽  
Yi Chai ◽  
Mingbiao Yu

With the increasing penetration of renewable resources such as wind and solar, the operation and planning of power systems, especially in terms of large-scale integration, are faced with great risks due to the inherent stochasticity of natural resources. Although this uncertainty can be anticipated, the timing, magnitude, and duration of fluctuations cannot be predicted accurately. In addition, the outputs of renewable power sources are correlated in space and time, and this brings further challenges for predicting the characteristics of their future behavior. To address these issues, this paper describes an unsupervised method for renewable scenario forecasts that considers spatiotemporal correlations based on generative adversarial networks (GANs), which have been shown to generate high-quality samples. We first utilized an improved GAN to learn unknown data distributions and model the dynamic processes of renewable resources. We then generated a large number of forecasted scenarios using stochastic constrained optimization. For validation, we used power-generation data from the National Renewable Energy Laboratory wind and solar integration datasets. The experimental results validated the effectiveness of our proposed method and indicated that it has significant potential in renewable scenario analysis.
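As a rough illustration of GAN-based scenario generation of the kind described above, the sketch below shows a plain generator/discriminator pair for daily generation profiles; it is a generic GAN sketch with assumed dimensions, not the authors' improved GAN or their constrained-optimization forecasting step.

```python
import torch
import torch.nn as nn

# Illustrative GAN for daily renewable-generation scenarios
# (e.g. 24 hourly values of normalized power output per scenario).
class Generator(nn.Module):
    def __init__(self, noise_dim=32, horizon=24):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(noise_dim, 128), nn.ReLU(),
            nn.Linear(128, 128), nn.ReLU(),
            nn.Linear(128, horizon), nn.Sigmoid())  # output in [0, 1]

    def forward(self, z):
        return self.net(z)

class Discriminator(nn.Module):
    def __init__(self, horizon=24):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(horizon, 128), nn.LeakyReLU(0.2),
            nn.Linear(128, 1))  # real/fake score

    def forward(self, x):
        return self.net(x)

# Once trained, many candidate scenarios can be drawn cheaply:
G = Generator()
scenarios = G(torch.randn(1000, 32))  # 1000 generation profiles, 24 h each
```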


2016 ◽  
Vol 144 (4) ◽  
pp. 1407-1421 ◽  
Author(s):  
Michael L. Waite

Abstract Many high-resolution atmospheric models can reproduce the qualitative shape of the atmospheric kinetic energy spectrum, which has a power-law slope of −3 at large horizontal scales that shallows to approximately −5/3 in the mesoscale. This paper investigates the possible dependence of model energy spectra on the vertical grid resolution. Idealized simulations forced by relaxation to a baroclinically unstable jet are performed for a wide range of vertical grid spacings Δz. Energy spectra are converged for Δz ≤ 200 m but are very sensitive to resolution with 500 m ≤ Δz ≤ 2 km. The nature of this sensitivity depends on the vertical mixing scheme. With no vertical mixing or with weak, stability-dependent mixing, the mesoscale spectra are artificially amplified by low resolution: they are shallower and extend to larger scales than in the converged simulations. By contrast, vertical hyperviscosity with a fixed grid-scale damping rate has the opposite effect: underresolved spectra are spuriously steepened. High-resolution spectra are converged except in the stability-dependent mixing case, where they are damped by excessive mixing due to enhanced shear over a wide range of horizontal scales. It is shown that converged spectra require resolution of all vertical scales associated with the resolved horizontal structures: these include quasigeostrophic scales for large-scale motions with small Rossby number and the buoyancy scale for small-scale motions at large Rossby number. It is speculated that some model energy spectra may be contaminated by low vertical resolution, and it is recommended that vertical-resolution sensitivity tests always be performed.
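As a generic illustration of how horizontal kinetic energy spectra and their slopes (the −3 and −5/3 ranges discussed above) can be diagnosed from gridded velocity fields, the following sketch uses a simple FFT along one horizontal direction; the function names, normalization, and averaging choices are assumptions, not the paper's own diagnostics.

```python
import numpy as np

def horizontal_ke_spectrum(u, v, dx):
    """Illustrative 1D horizontal kinetic energy spectrum from 2D velocity
    slices u, v (shape ny x nx, uniform grid spacing dx), averaged over rows."""
    nx = u.shape[1]
    k = np.fft.rfftfreq(nx, d=dx) * 2.0 * np.pi        # angular wavenumbers
    u_hat = np.fft.rfft(u, axis=1) / nx
    v_hat = np.fft.rfft(v, axis=1) / nx
    spectrum = 0.5 * (np.abs(u_hat) ** 2 + np.abs(v_hat) ** 2).mean(axis=0)
    return k[1:], spectrum[1:]                          # drop the k = 0 mean

def spectral_slope(k, E, kmin, kmax):
    """Least-squares slope of log E vs. log k over a chosen wavenumber band,
    e.g. to compare against the -3 and -5/3 ranges."""
    sel = (k >= kmin) & (k <= kmax)
    return np.polyfit(np.log(k[sel]), np.log(E[sel]), 1)[0]
```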

