A Novel Bi-Modal Wave Analysis Approach to Pipelay Installation Analysis

Author(s):  
Dan Lee ◽  
Piotr Niesluchowski

Abstract The single-wave or uni-modal wave analysis approach to defining installation seastates, based on a single wave with varying directions, wave heights and periods, is standard practice among installation contractors. However, on many offshore projects, e.g. offshore Trinidad, Senegal and Mauritania, bi-modal seastates are a common occurrence, yet they are not considered in installation analysis because of the complexity and computation time required to capture two wave trains, i.e. wind-sea and swell, concurrently from different directions. A novel bi-modal wave analysis approach is developed to assess the risk to pipelay installation operations from the impact of bi-modal waves on the installation vessel, characterised by two peak frequencies of varying directions, wave heights and periods. The approach requires clustered data, based on hindcast wave data over a period of time, which can be provided by the metocean specialist. A combination of statistical evaluation of the clustered data and vessel response screening is used to identify critical clustered pairs for further installation analysis, complementing the established single-wave analysis and the associated installation seastates. An example illustrates the benefits of considering bi-modal waves and demonstrates the use of this novel approach to ensure any potential risk is captured, so that pipelay installation operations can be carried out in a safe offshore environment.
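To illustrate the screening idea described in this abstract (not the authors' actual procedure), here is a minimal Python sketch: each clustered wind-sea/swell pair is ranked by a crude vessel-response proxy, and the top candidates are flagged for full installation analysis. All field names, the resonant period, and the proxy formula are hypothetical assumptions for illustration.

```python
from dataclasses import dataclass

@dataclass
class WaveCluster:
    # One clustered wind-sea/swell pair derived from hindcast data
    hs_sea: float     # significant wave height of the wind-sea (m)
    tp_sea: float     # peak period of the wind-sea (s)
    dir_sea: float    # mean direction of the wind-sea (deg)
    hs_swell: float   # significant wave height of the swell (m)
    tp_swell: float   # peak period of the swell (s)
    dir_swell: float  # mean direction of the swell (deg)
    prob: float       # probability of occurrence within the hindcast

def response_proxy(c: WaveCluster, resonant_tp: float = 14.0) -> float:
    """Crude screening metric: weight each wave train by its height and by
    how close its peak period is to a (hypothetical) resonant vessel period,
    so near-resonant swells rank high even at modest heights."""
    def one(hs, tp):
        return hs / (1.0 + abs(tp - resonant_tp))
    return one(c.hs_sea, c.tp_sea) + one(c.hs_swell, c.tp_swell)

def screen_clusters(clusters, top_n=2):
    """Return the top-n clusters ranked by the screening metric:
    candidates for full time-domain installation analysis."""
    return sorted(clusters, key=response_proxy, reverse=True)[:top_n]
```

A cluster with a moderate swell near the assumed resonant period will outrank a steeper but short-period wind-sea, which is the behaviour a response-based screen is meant to capture.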

2019 ◽  
Vol 42 (4) ◽  
pp. 647-654
Author(s):  
Alessandro Paglianti ◽  
Francesco Maluta ◽  
Giuseppina Montante

The dissolution of salt particles in slurry stirred tanks poses an ambitious challenge for the application of Electrical Resistance Tomography in the process industry, because the presence of high loadings of inert particles requires a purposely developed post-processing method for the experimental data. To optimize the working conditions of the dissolution process, two characteristic times are required: the time for liquid homogenization in the tank and the time required for complete dissolution of the salt particles. The former has been experimentally determined in previous investigations, in stirred tanks working with both single-phase and multiphase mixtures. The latter has not been analyzed so far, due to the lack of experimental procedures for distinguishing it from the former. In this work, a novel approach for the simultaneous identification of the two characteristic times is presented. The new procedure has a significant impact on production processes, since it offers a tool for identifying when the soluble particle size affects the dissolution dynamics and when the stirred tank dynamics is governed by liquid homogenization alone, in which case reducing the particle size does not speed up the process.
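The two characteristic times can be illustrated with a simple sketch: homogenization is reached when the spatial variance of the measured field settles, while dissolution is complete when the mean signal reaches its final plateau. This is a minimal Python illustration of the distinction, not the paper's post-processing method; the signal names and tolerances are assumptions.

```python
def characteristic_times(mean_cond, var_cond, t, mean_tol=0.01, var_tol=0.01):
    """Estimate the two characteristic times from an ERT-style signal:
    - homogenization time: first time the spatial variance of the
      conductivity field drops below var_tol,
    - dissolution time: first time the mean conductivity comes within
      mean_tol (relative) of its final, fully dissolved value.
    Illustrative thresholds, not the authors' calibrated procedure."""
    final = mean_cond[-1]
    t_mix = next(ti for ti, v in zip(t, var_cond) if v < var_tol)
    t_diss = next(ti for ti, m in zip(t, mean_cond)
                  if abs(m - final) < mean_tol * abs(final))
    return t_mix, t_diss
```

With synthetic exponential transients the dissolution time comes out well after the mixing time, which is exactly the separation the paper's procedure makes measurable.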


Energies ◽  
2021 ◽  
Vol 14 (24) ◽  
pp. 8196
Author(s):  
Aleksander Jakubowski ◽  
Leszek Jarzebowicz ◽  
Mikołaj Bartłomiejczyk ◽  
Jacek Skibicki ◽  
Slawomir Judek ◽  
...  

The paper proposes a novel approach to modeling electrified transportation systems. The proposed solution reflects the mechanical dynamics of vehicles as well as the distribution and losses of the electric supply. Moreover, energy conversion losses between the mechanical and electrical subsystems and their bilateral influences are included. Such a complete model makes it possible to replicate, e.g., the impact of voltage drops on vehicle acceleration or the necessity of partially disposing of regenerative braking energy due to a temporary lack of power transmission capability. The modeling methodology uses a flexible twin data-bus structure, which imposes no limitation on the number of vehicles and enables modeling complex traction power supply structures. The proposed solution is suitable for various electrified transportation systems, including suburban and urban ones. The modeling methodology is applicable, inter alia, to Matlab/Simulink, which makes it broadly available and customizable and provides short computation times. The applicability and accuracy of the method were verified by comparing simulation and measurement results for an exemplary trolleybus system operating in Pilsen, Czech Republic. Simulation of the daily operation of an area comprising four supply sections and a maximum of nine simultaneous vehicles showed good conformance with the measured data, with the difference in total consumed energy not exceeding 5%.
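The coupling between the mechanical and electrical subsystems can be sketched for a single feeder: the vehicle demands traction power, but the supply voltage sags with current, which caps the power the line can actually deliver. A minimal Python sketch of one electrical-side step; the voltage, line resistance, and single-feeder topology are illustrative assumptions, not the Pilsen network or the authors' twin data-bus model.

```python
import math

def supply_current(p_demand, v0=700.0, r_line=0.5):
    """Given the traction power demand at the pantograph (W), solve
    p = v * i with v = v0 - r * i for the line current, capping the
    demand at the maximum the feeder can transmit, v0^2 / (4 r).
    Returns (current in A, pantograph voltage in V)."""
    p_max = v0 * v0 / (4.0 * r_line)
    p = min(p_demand, p_max)  # cap traction demand at the transmissible limit
    # smaller root of r*i^2 - v0*i + p = 0 (the stable operating point)
    i = (v0 - math.sqrt(v0 * v0 - 4.0 * r_line * p)) / (2.0 * r_line)
    return i, v0 - r_line * i
```

Feeding the resulting voltage back into the mechanical model is what lets a full simulation reproduce effects such as reduced acceleration under voltage drops.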


2020 ◽  
Author(s):  
Breogán Gómez ◽  
Gonzalo Miguez-Macho

Abstract. Spectral nudging forces a selected part of the spectrum of a model's solution with the equivalent part of a reference dataset, such as an analysis, reanalysis or another model. This constrains the evolution of certain scales, typically the synoptic ones, while allowing the others to evolve freely. In a limited area model (LAM) setting, spectral nudging is commonly used to impose the large-scale circulation in the interior of the domain, so that the high-resolution features in the LAM's forecast are consistent with the global circulation patterns. In a previous study over a mid-latitude domain, we investigated two parameters of spectral nudging that are often overlooked despite having a significant impact on the model solution. First, the cut-off wavenumber, which determines the scales that are nudged and has a critical impact on the spatial structure of the model solution. Second, the spin-up time, which is the time required to balance the nudging force with the model's internal climate and roughly indicates the point from which the simulation results contain useful information. The question remains whether our conclusions for mid-latitudes are applicable to other areas of the planet. Tropical latitudes offer an interesting testbed, as their atmospheric dynamics have unique characteristics with respect to those further north and yet result from the same underlying physical principles. We study the impact of these two parameters in a domain centred on the Gulf of Mexico, with a particular aim of evaluating their performance for hurricane modelling. We perform 4-day simulations over six monthly periods between 2010 and 2015, testing several spectral nudging configurations. Our results indicate that the optimal cut-off wavenumber corresponds to wavelengths between 1000 km and 1500 km, depending on the studied variable, and that the spin-up time required is at least 72 h to 96 h, which is consistent with our previous work.
We evaluate our findings in four hurricane cases, allowing for at least 96 h of spin-up time before the system becomes a tropical storm. Results confirm that the experiments with cut-off wavenumbers near the Rossby radius of deformation perform best. We also propose a novel approach in which a different cut-off wavenumber is used for each variable. Our tests in the hurricane cases show that the latter setup outperforms all of the other spectral nudging experiments when compared to observations.
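The core mechanism of spectral nudging can be sketched in a few lines: transform a field to spectral space, relax only the wavenumbers below the cut-off toward the reference, and transform back. This is a minimal 2-D periodic-domain sketch, assuming a square grid and a single relaxation coefficient; real LAMs apply this per vertical level and per variable, with non-periodic treatments.

```python
import numpy as np

def spectral_nudge(field, reference, cutoff_km, dx_km, alpha=0.1):
    """One relaxation step of spectral nudging on a 2-D periodic field:
    blend only the scales with wavelength longer than cutoff_km toward
    the reference, leaving the shorter scales untouched."""
    kx = np.fft.fftfreq(field.shape[1], d=dx_km)  # cycles per km
    ky = np.fft.fftfreq(field.shape[0], d=dx_km)
    kk = np.sqrt(kx[None, :] ** 2 + ky[:, None] ** 2)
    large_scale = kk < 1.0 / cutoff_km            # wavelengths > cutoff
    f_hat, r_hat = np.fft.fft2(field), np.fft.fft2(reference)
    f_hat[large_scale] += alpha * (r_hat[large_scale] - f_hat[large_scale])
    return np.real(np.fft.ifft2(f_hat))
```

With a cut-off of 1000 km, a 3200 km wave is pulled toward the reference while grid-scale structure is left free, which is the scale separation the paper's experiments tune.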


Methodology ◽  
2007 ◽  
Vol 3 (1) ◽  
pp. 14-23 ◽  
Author(s):  
Juan Ramon Barrada ◽  
Julio Olea ◽  
Vicente Ponsoda

Abstract. The Sympson-Hetter (1985) method provides a means of controlling the maximum exposure rate of items in Computerized Adaptive Testing. Through a series of simulations, control parameters are set that determine the probability that an item is administered once selected. This method presents two main problems: it requires a long computation time to calculate the parameters, and the maximum exposure rate ends up slightly above the fixed limit. Van der Linden (2003) presented two alternatives which appear to solve both problems. The impact of these methods on measurement accuracy had not yet been tested. We show that these methods over-restrict the exposure of some highly discriminating items, and thus accuracy is decreased. It is also shown that, when the desired maximum exposure rate is near the minimum possible value, these methods yield an empirical maximum exposure rate clearly above the goal. A new method, based on the initial estimation of the probability of administration and the probability of selection of the items with the restricted method (Revuelta & Ponsoda, 1998), is presented in this paper. It can be used with the Sympson-Hetter method and with the two van der Linden methods. When used with Sympson-Hetter, it speeds the convergence of the control parameters without decreasing accuracy.
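The Sympson-Hetter mechanism itself is simple to sketch: an item is administered only when it is both selected and passes a Bernoulli draw with control parameter K_i, so the exposure rate P(A_i) = K_i * P(S_i) is driven under the target r_max. Below is a minimal deterministic sketch of the calibration loop, assuming fixed, known selection probabilities; in practice P(S_i) is itself re-estimated by simulation at each iteration, which is the source of the long computation times the abstract mentions.

```python
def calibrate_sh(select_probs, r_max, n_iter=200):
    """Iterative calibration of Sympson-Hetter control parameters K_i.
    select_probs: hypothetical per-item selection probabilities P(S_i).
    Whenever an item's exposure K_i * P(S_i) exceeds r_max, K_i is cut
    back to r_max / P(S_i); items selected rarely keep K_i = 1."""
    k = [1.0] * len(select_probs)
    for _ in range(n_iter):
        for i, ps in enumerate(select_probs):
            if k[i] * ps > r_max:
                k[i] = r_max / ps  # classic SH adjustment
    return k
```

Note how an item selected half the time gets K cut to 0.4 under a 0.2 target, while an item selected 10% of the time is left unrestricted.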


2021 ◽  
Author(s):  
Alexander Subbotin ◽  
Samin Aref

Abstract We study international mobility in academia, with a focus on the migration of published researchers to and from Russia. Using an exhaustive set of over 2.4 million Scopus publications, we analyze all researchers who have published with a Russian affiliation address in Scopus-indexed sources in 1996–2020. The migration of researchers is observed through changes in their affiliation addresses, which altered their modal countries of affiliation across different years. While only 5.2% of these researchers were internationally mobile, they accounted for a substantial proportion of citations. Our estimates of net migration rates indicate that while Russia was a donor country in the late 1990s and early 2000s, it has experienced a relatively balanced circulation of researchers in more recent years. These findings suggest that the current trends in scholarly migration in Russia could be better framed as brain circulation, rather than as brain drain. Overall, researchers emigrating from Russia outnumbered and outperformed researchers immigrating to Russia. Our analysis of the subject categories of publication venues shows that in the past 25 years, Russia has, overall, suffered a net loss in most disciplines, most notably in the five disciplines of neuroscience, decision sciences, mathematics, biochemistry, and pharmacology. We demonstrate the robustness of our main findings under random exclusion of data and changes in numeric parameters. Our substantive results shed light on new aspects of international mobility in academia, and on the impact of this mobility on a national science system, which have direct implications for policy development. Methodologically, our novel approach to handling big data can be adopted as a framework of analysis for studying scholarly migration in other countries.
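The modal-country idea used to detect moves can be sketched directly: for each author, take the most frequent affiliation country per year, then count transitions of that modal country into and out of a target country across consecutive observed years. A minimal Python sketch under those assumptions; the actual study's handling of ties, gaps, and multi-affiliation records is not reproduced here.

```python
from collections import Counter

def yearly_modal_country(pubs):
    """pubs: list of (year, country) affiliation records for one author.
    Returns the author's modal country of affiliation for each year."""
    by_year = {}
    for year, country in pubs:
        by_year.setdefault(year, Counter())[country] += 1
    return {y: c.most_common(1)[0][0] for y, c in by_year.items()}

def count_moves(modal, target="RU"):
    """Count immigrations to / emigrations from the target country as
    changes of the modal country between consecutive observed years."""
    seq = [modal[y] for y in sorted(modal)]
    out_m = sum(1 for a, b in zip(seq, seq[1:]) if a == target and b != target)
    in_m = sum(1 for a, b in zip(seq, seq[1:]) if a != target and b == target)
    return in_m, out_m
```

Aggregating these per-author counts over all authors and years is what yields the net migration rates discussed in the abstract.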


Author(s):  
J. R. Barnes ◽  
C. A. Haswell

Abstract Ariel's ambitious goal to survey a quarter of known exoplanets will transform our knowledge of planetary atmospheres. Masses measured directly with the radial velocity technique are essential for well-determined planetary bulk properties. Radial velocity masses will provide important checks of masses derived from atmospheric fits, or alternatively can be treated as a fixed input parameter to reduce possible degeneracies in atmospheric retrievals. We quantify the impact of stellar activity on planet mass recovery for the Ariel mission sample using Sun-like spot models scaled for active stars, combined with other noise sources. Planets with necessarily well-determined ephemerides will be selected for characterisation with Ariel. With this prior requirement, we simulate the derived planet mass precision as a function of the number of observations for a prospective sample of Ariel targets. We find that quadrature sampling can significantly reduce the time commitment required for follow-up RVs, and is most effective when the planetary RV signature is larger than the RV noise. For a typical radial velocity instrument operating on a 4 m class telescope and achieving 1 m s−1 precision, between ~17% and ~37% of the time commitment is spent on the 7% of planets with mass Mp < 10 M⊕. In many low-activity cases, the time required is limited by asteroseismic and photon noise. For low-mass or faint systems, we can recover masses with the same precision up to ~3 times more quickly with an instrumental precision of ~10 cm s−1.
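The scaling behind these time-commitment estimates can be illustrated with the standard circular-orbit RV semi-amplitude formula and an idealized white-noise error model. The sqrt(2/N) uncertainty scaling below is a textbook simplification, not the paper's full noise model (which includes activity and asteroseismic terms).

```python
import math

def rv_semi_amplitude(mp_earth, p_days, mstar_sun=1.0):
    """Circular-orbit, edge-on RV semi-amplitude K in m/s:
    K ≈ 0.0895 m/s * (Mp/M⊕) * (M*/M☉)^(-2/3) * (P/yr)^(-1/3),
    the standard formula in the small-Mp limit."""
    return 0.0895 * mp_earth * mstar_sun ** (-2.0 / 3.0) * (p_days / 365.25) ** (-1.0 / 3.0)

def n_obs_for_mass_precision(k, sigma_rv, frac_err=0.2):
    """Rough number of RV epochs so that the uncertainty on K (and hence
    on the mass), taken as ~ sigma_rv * sqrt(2/N), falls below
    frac_err * K. Idealized white-noise assumption."""
    return math.ceil(2.0 * (sigma_rv / (frac_err * k)) ** 2)
```

For example, a 1 m/s planet signal observed with 1 m/s instrumental precision needs ~50 epochs for a 20% mass, which is why low-mass planets dominate the follow-up budget; cutting sigma_rv to 10 cm/s collapses that requirement, in line with the abstract's ~3x speed-up for low-mass systems.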


Coatings ◽  
2021 ◽  
Vol 11 (7) ◽  
pp. 758
Author(s):  
Cibi Pranav ◽  
Minh-Tan Do ◽  
Yi-Chang Tsai

High Friction Surfaces (HFS) are applied to increase friction capacity on critical roadway sections, such as horizontal curves. HFS friction deterioration on these sections is a safety concern. This study characterizes aggregate loss, one of the main failure mechanisms of HFS, using texture parameters to study its relationship with friction. Tests are conducted on selected HFS spots with different aggregate-loss severity levels at the National Center for Asphalt Technology (NCAT) Test Track. Friction tests are performed using a Dynamic Friction Tester (DFT). The surface texture is measured with a high-resolution 3D pavement scanning system (0.025 mm vertical resolution). Texture data are processed and analyzed with the MountainsMap software. The correlations between the DFT friction coefficient and the texture parameters confirm the impact of changes in the aggregates' characteristics (including height, shape, and material volume) on friction. A novel approach to detecting the HFS friction-coefficient transition due to aggregate loss, inspired by previous work on the tribology of coatings, is proposed. Preliminary results show that the proposed approach can capture the rapid friction-coefficient transition, similar to observations at NCAT. Perspectives for future research are presented and discussed.
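The correlation analysis between the DFT friction coefficient and candidate texture parameters reduces to computing correlation coefficients across test spots. A minimal, dependency-free Pearson correlation sketch; the actual study's choice of texture parameters and any rank-based alternatives are not assumed here.

```python
import math

def pearson(x, y):
    """Plain Pearson correlation coefficient between two equal-length
    samples, e.g. DFT friction coefficients vs. one 3-D texture
    parameter measured across the same HFS test spots."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return sum((a - mx) * (b - my) for a, b in zip(x, y)) / (sx * sy)
```

Running this for each texture parameter against friction identifies which aggregate characteristics (height, shape, material volume) track the measured deterioration.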


2021 ◽  
Vol 13 (5) ◽  
pp. 874
Author(s):  
Yu Chen ◽  
Mohamed Ahmed ◽  
Natthachet Tangdamrongsub ◽  
Dorina Murgulet

The Nile River stretches from south to north throughout the Nile River Basin (NRB) in Northeast Africa. Ethiopia, where the Blue Nile originates, has begun the construction of the Grand Ethiopian Renaissance Dam (GERD), which will be used to generate electricity. However, the impact of the GERD on land deformation caused by significant water relocation has not been rigorously considered in scientific research. In this study, we develop a novel approach for predicting large-scale land deformation induced by the construction of the GERD reservoir. We also investigate the limitations of using the Gravity Recovery and Climate Experiment Follow On (GRACE-FO) mission to detect GERD-induced land deformation. We simulated three land deformation scenarios related to filling the expected reservoir volume, 70 km3, using 5-, 10-, and 15-year filling scenarios. The results indicated: (i) trends in downward vertical displacement estimated at −17.79 ± 0.02, −8.90 ± 0.09, and −5.94 ± 0.05 mm/year, for the 5-, 10-, and 15-year filling scenarios, respectively; (ii) the western (eastern) parts of the GERD reservoir are estimated to move toward the reservoir’s center by +0.98 ± 0.01 (−0.98 ± 0.01), +0.48 ± 0.00 (−0.48 ± 0.00), and +0.33 ± 0.00 (−0.33 ± 0.00) mm/year, under the 5-, 10- and 15-year filling strategies, respectively; (iii) the northern part of the GERD reservoir is moving southward by +1.28 ± 0.02, +0.64 ± 0.01, and +0.43 ± 0.00 mm/year, while the southern part is moving northward by −3.75 ± 0.04, −1.87 ± 0.02, and −1.25 ± 0.01 mm/year, during the three examined scenarios, respectively; and (iv) the GRACE-FO mission can only detect 15% of the large-scale land deformation produced by the GERD reservoir. The methods and results demonstrated in this study provide insights into the possible impacts of reservoir impoundment on land surface deformation, which can be adopted for the GERD project or similar future dam construction plans.


2014 ◽  
Vol 660 ◽  
pp. 971-975 ◽  
Author(s):  
Mohd Norzaim bin Che Ani ◽  
Siti Aisyah Binti Abdul Hamid

Time study is a process of observation concerned with determining the amount of time required to perform a unit of work, comprising internal, external and machine time elements. Time study was first used in manufacturing in Europe in the 1760s. It is a flexible lean-manufacturing technique suitable for a wide range of situations. The time study approach enables reducing or minimizing non-value-added activities in the process cycle time, which contribute to bottleneck time. Improving process cycle time increases an organization's productivity and reduces cost. This paper focuses on a time study of selected processes with bottleneck time, identifying the possible root causes that contribute to the high time required to perform a unit of work.
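The arithmetic of a time study is straightforward to sketch: sum the observed element times per process, measure the non-value-added share, and take the process with the longest cycle time as the bottleneck. A minimal Python illustration with hypothetical element names; the paper's actual element classification and observed data are not reproduced.

```python
def cycle_times(elements):
    """elements: list of (name, seconds, value_added) observed work
    elements for one process. Returns the total cycle time and the
    non-value-added share that a time study would target first."""
    total = sum(t for _, t, _ in elements)
    nva = sum(t for _, t, va in elements if not va)
    return total, nva / total

def bottleneck(processes):
    """processes: {process_name: total_cycle_time_seconds}.
    The bottleneck is the process with the longest cycle time."""
    return max(processes, key=processes.get)
```

Ranking processes this way shows where eliminating waiting or transport elements shortens the overall cycle, while the value-added share flags how much of the bottleneck time is actually reducible.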

