primary advantage
Recently Published Documents


TOTAL DOCUMENTS: 145 (FIVE YEARS: 11)

H-INDEX: 14 (FIVE YEARS: 0)

2021 · Vol 154 (A2) · Author(s): G J Macfarlane, T Lilienthal, R J Ballantyne, S Ballantyne

The Floating Harbour Transhipper (FHT) is a pioneering logistics solution designed to meet growing demands for coastal transhipment in the mining sector as well as in commercial port operations. The primary advantage of the FHT system is that it can reduce transhipment delays caused by inclement weather by reducing relative motions between the FHT and the feeder vessel: the feeder is sheltered inside the FHT well dock, in contrast to the more exposed position of a feeder in a traditional side-by-side mooring arrangement. This paper discusses previously published studies of the relative motions of vessels engaged in side-by-side mooring arrangements and presents details and results from a series of physical scale-model experiments in which both side-by-side and aft well dock mooring arrangements are investigated. The results provide strong evidence that the FHT well dock concept can significantly reduce the heave, pitch and roll motions of feeder vessels when transhipping in open seas – this being the cornerstone of any successful open-water transhipment operation.
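As a rough illustration of the kind of comparison reported above, the sketch below computes the percentage reduction in RMS feeder motion between the two mooring arrangements from model-test time series. All signal names and numbers are hypothetical placeholders, not the paper's data.

```python
import numpy as np

def rms(x):
    """Root-mean-square of a zero-meaned motion time series."""
    x = np.asarray(x, dtype=float)
    return np.sqrt(np.mean((x - x.mean()) ** 2))

def motion_reduction(side_by_side, well_dock):
    """Percentage reduction in RMS feeder motion, per degree of freedom,
    when moored in the well dock versus side-by-side."""
    return {dof: 100.0 * (1.0 - rms(well_dock[dof]) / rms(side_by_side[dof]))
            for dof in side_by_side}

# Hypothetical model-test records (heave in m, roll in deg):
rng = np.random.default_rng(0)
t = np.linspace(0.0, 600.0, 6000)
side_by_side = {"heave": 0.8 * np.sin(0.5 * t) + 0.1 * rng.standard_normal(t.size),
                "roll":  3.0 * np.sin(0.4 * t) + 0.2 * rng.standard_normal(t.size)}
well_dock    = {"heave": 0.3 * np.sin(0.5 * t) + 0.1 * rng.standard_normal(t.size),
                "roll":  1.0 * np.sin(0.4 * t) + 0.2 * rng.standard_normal(t.size)}
print(motion_reduction(side_by_side, well_dock))
```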



2021 · Author(s): Lars Kolbowski, Swantje Lenz, Lutz Fischer, Ludwig R Sinn, Francis J O'Reilly, ...

Proteome-wide crosslinking mass spectrometry studies have coincided with the advent of MS-cleavable crosslinkers that can reveal the individual masses of the two crosslinked peptides. However, such studies have recently also been published with non-cleavable crosslinkers, suggesting that MS-cleavability is not essential. We therefore examined in detail the advantages and disadvantages of using the most popular MS-cleavable crosslinker, DSSO. Indeed, DSSO gave rise to signature peptide fragments with a distinct mass difference (doublet) for nearly all identified crosslinked peptides. Surprisingly, we could show that it was not these peptide masses that proved the main advantage of MS-cleavability of the crosslinker, but improved peptide backbone fragmentation that allowed for more confident peptide identification. We also show that the more intricate MS3-based data acquisition approaches lack sensitivity and specificity, causing them to be outperformed by the simpler and faster stepped HCD method. This understanding will guide future developments and applications of proteome-wide crosslinking mass spectrometry.
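The doublet search alluded to above can be sketched as a scan over deconvoluted fragment masses for pairs separated by a fixed signature delta. The 31.9721 Da spacing used here is the alkene/thiol pair spacing commonly quoted for DSSO; treat it, the tolerance, and the toy masses as assumptions to verify against an actual pipeline.

```python
import numpy as np

DOUBLET_DELTA = 31.9721  # Da; DSSO alkene/thiol spacing as commonly quoted (assumption)
PPM_TOL = 10.0           # matching tolerance in parts per million

def find_doublets(masses):
    """Return index pairs (i, j), into the sorted mass list, whose masses
    differ by the signature doublet delta within the ppm tolerance."""
    m = np.sort(np.asarray(masses, dtype=float))
    pairs = []
    for i, lo in enumerate(m):
        target = lo + DOUBLET_DELTA
        j = np.searchsorted(m, target)
        for k in (j - 1, j):
            if 0 <= k < m.size and abs(m[k] - target) <= target * PPM_TOL * 1e-6:
                pairs.append((i, k))
    return pairs

# toy deconvoluted fragment masses (Da); one synthetic doublet included
masses = [742.3812, 768.4021, 905.4210, 742.3812 + 31.9721]
print(find_doublets(masses))  # -> [(0, 2)]
```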



2021 · Vol 118 (37) · pp. e2022204118 · Author(s): Tanner J. Corrado, Zihan Huang, Dezhao Huang, Noah Wamble, Tengfei Luo, ...

Polymers of intrinsic microporosity (PIMs) have shown promise in pushing the limits of gas separation membranes, recently redefining upper bounds for a variety of gas pair separations. However, many of these membranes still suffer from reductions in permeability over time, removing the primary advantage of this class of polymer. In this work, a series of pentiptycene-based PIMs incorporated into copolymers with PIM-1 are examined to identify fundamental structure–property relationships between the configuration of the pentiptycene backbone and its accompanying linear or branched substituent group. The incorporation of pentiptycene provides a route to instill a more permanent, configuration-based free volume, resistant to physical aging via traditional collapse of conformation-based free volume. PPIM-ip-C and PPIM-np-S, copolymers with C- and S-shape backbones and branched isopropoxy and linear n-propoxy substituent groups, respectively, each exhibited initial separation performance enhancements relative to PIM-1. Additionally, aging-enhanced gas permeabilities were observed, a stark departure from the typical permeability losses pure PIM-1 experiences with aging. Mixed-gas separation data showed enhanced CO2/CH4 selectivity relative to the pure-gas permeation results, with only ∼20% decreases in selectivity when moving from a CO2 partial pressure of ∼2.4 atm to ∼7.1 atm in a mixed-gas CO2/CH4 feed stream. These results highlight the potential of pentiptycene's intrinsic, configurational free volume for simultaneously delivering size-sieving above the 2008 upper bound, along with exceptional resistance to physical aging that often plagues high-free-volume PIMs.
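The selectivity figure quoted above is a simple ratio of permeabilities; the sketch below works through the arithmetic of a ∼20% selectivity decline between the two feed pressures. The permeability values are invented solely to reproduce the stated percentage, not taken from the paper.

```python
def selectivity(p_co2, p_ch4):
    """Permselectivity alpha = P(CO2) / P(CH4); permeabilities in Barrer."""
    return p_co2 / p_ch4

# hypothetical mixed-gas permeabilities at the two CO2 partial pressures
alpha_low  = selectivity(p_co2=4200.0, p_ch4=210.0)  # at ~2.4 atm CO2
alpha_high = selectivity(p_co2=3600.0, p_ch4=225.0)  # at ~7.1 atm CO2
decline = 100.0 * (1.0 - alpha_high / alpha_low)
print(f"alpha: {alpha_low:.1f} -> {alpha_high:.1f} ({decline:.0f}% decrease)")
```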



2021 · pp. 1-26 · Author(s): Jeff Z. Pan, Elspeth Edelstein, Patrik Bansky, Adam Wyner

Abstract: Recent success of knowledge graphs has spurred interest in applying them in open science, such as in intelligent survey systems for scientists. However, efforts to understand the quality of candidate survey questions provided by these methods have been limited. Indeed, existing methods do not consider the type of on-the-fly content planning that is possible in face-to-face surveys, and hence do not guarantee that the selection of subsequent questions is based on responses to previous questions in a survey. To address this limitation, we propose a dynamic and informative solution for an intelligent survey system based on knowledge graphs. To illustrate our proposal, we look into social science surveys, focusing on ordering the questions of a questionnaire component by their level of acceptance, along with conditional triggers that further customise participants' experience. Our main findings are: (i) evaluation of the proposed approach shows that the dynamic component can be beneficial in terms of lowering the number of questions asked per variable, thus allowing more informative data to be collected in a survey of equivalent length; and (ii) a primary advantage of the proposed approach is that it enables grouping of participants according to their responses, so that participants are not only served appropriate follow-up questions, but their responses to these questions may be analysed in the context of some initial categorisation. We believe that the proposed approach can easily be applied to other social science surveys based on grouping definitions in their contexts. The knowledge-graph-based intelligent survey approach proposed in our work allows online questionnaires to approach face-to-face interaction in their level of informativity and responsiveness, as well as duplicating certain advantages of interview-based data collection.
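The combination of acceptance-ordered questions and conditional triggers can be sketched as a priority-ordered queue whose items fire only when a predicate over earlier responses holds. This is a minimal illustration of the idea, not the authors' knowledge-graph implementation; all identifiers and questions are hypothetical.

```python
from dataclasses import dataclass
from typing import Callable, Dict, List

@dataclass
class Question:
    qid: str
    text: str
    acceptance: float  # ordering key: ask high-acceptance items first
    trigger: Callable[[Dict[str, str]], bool] = lambda responses: True

def run_survey(questions: List[Question], answer) -> Dict[str, str]:
    """Ask questions in descending acceptance order, serving each follow-up
    only when its trigger fires on the responses collected so far."""
    responses: Dict[str, str] = {}
    for q in sorted(questions, key=lambda q: q.acceptance, reverse=True):
        if q.trigger(responses):
            responses[q.qid] = answer(q)
    return responses

# hypothetical questionnaire fragment with one conditional follow-up
qs = [
    Question("q1", "Do you use public transport?", acceptance=0.9),
    Question("q2", "Which line do you use most?", acceptance=0.6,
             trigger=lambda r: r.get("q1") == "yes"),
]
print(run_survey(qs, answer=lambda q: "yes"))
```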



2021 · Vol 51 (2) · pp. 109-127 · Author(s): Peter FABO, Pavol NEJEDLÍK, Michal KUBA, Milan ONDERKA, Dušan PODHORSKÝ

Hydrometeors (rain, fog and ice crystals) affect the transmission of electromagnetic signals. Previous research showed that alterations in a signal's amplitude and phase are affected by the composition of the atmosphere, e.g. the presence of hydrometeors. The majority of hydrometeor-detection methods are based on the attenuation of electromagnetic signals as they penetrate the atmosphere; methods based on monitoring other signal parameters have appeared only recently. This article presents the first results from our investigation of how hydrometeors affect the phase differences in signals transmitted by BTS (base transceiver station) stations. Cell phone operators transmit electromagnetic signals in the 1 GHz frequency band. This paper describes a novel concept in which the phase difference between two signals arriving at two different antennas is used to detect hydrometeors. Although the described concept is assumed to be independent of signal strength, the analysed signal must still be detectable. The primary advantage of the proposed passive method is that the signal is almost ubiquitous and does not require demodulation. In densely populated areas, the network of BTS stations reaches a spatial density of one station per km², which offers an excellent opportunity to use the signal for detection purposes.
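The core measurement, a phase difference between the same carrier received on two antennas, can be sketched with complex baseband samples: the angle of the coherently averaged conjugate product of the two recordings. The sampling rate, tone frequency, and noise levels below are invented for illustration.

```python
import numpy as np

def phase_difference(x1, x2):
    """Mean phase difference (radians) between two complex baseband
    recordings of the same narrowband carrier at two antennas."""
    x1, x2 = np.asarray(x1), np.asarray(x2)
    # conjugate product removes the common carrier phase; its angle is the
    # antenna-to-antenna phase offset, averaged coherently over all samples
    return np.angle(np.vdot(x2, x1))

# toy example: same tone, second antenna delayed by 0.3 rad, plus noise
fs, f0, n = 1.0e6, 12.5e3, 4096
t = np.arange(n) / fs
rng = np.random.default_rng(1)
s = np.exp(1j * 2 * np.pi * f0 * t)
x1 = s + 0.05 * (rng.standard_normal(n) + 1j * rng.standard_normal(n))
x2 = s * np.exp(-1j * 0.3) + 0.05 * (rng.standard_normal(n) + 1j * rng.standard_normal(n))
print(phase_difference(x1, x2))  # ~0.3
```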



2021 · Vol 11 (1) · Author(s): B. Yadidya, A. D. Rao, Sachiko Mohanty

Abstract: The changes in the physical properties of the ocean on a diurnal scale primarily occur in the surface mixed layer and the pycnocline. The Price–Weller–Pinkel model, which modifies the surface mixed layer, and an internal wave model based on the Garrett–Munk spectrum, which calculates the vertical displacements due to internal waves, are coupled to simulate the diurnal variability in temperature and salinity, and thereby density profiles. The coupled model is used to simulate the hourly variations in density at the RAMA buoy (15° N, 90° E) in the central Bay of Bengal and at BD12 (10.5° N, 94° E) in the Andaman Sea. The simulations are validated against in-situ observations from December 2013 to November 2014. The primary advantage of this model is that it can simulate spatial variability as well. An integrated model is also tested and validated by using the output of a 3D model to initialize the coupled model during January, April, July, and October. The 3D model can be used to initialize the coupled model at any given location within the model domain to simulate the diurnal variability of density. The simulations showed promising results, which could be further used in simulating the acoustic fields and propagation losses that are crucial for Navy operations.
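The internal-wave contribution to the density signal amounts to vertically displacing the background isopycnals, i.e. rho(z, t) ≈ rho_bg(z − η(z, t)) for small displacements. The sketch below shows only that superposition step with a single hypothetical mode; the paper's Garrett–Munk formulation sums over many wavenumbers and frequencies.

```python
import numpy as np

def displaced_density(z, rho_bg, eta):
    """Density profile after vertically displacing each isopycnal by eta(z),
    i.e. rho(z) ~ rho_bg(z - eta(z)) for small displacements."""
    return np.interp(z - eta, z, rho_bg)

# hypothetical background stratification and a single internal-wave mode
z = np.linspace(0.0, 200.0, 201)         # depth (m)
rho_bg = 1021.0 + 0.02 * z               # kg/m^3, linear stratification
eta = 8.0 * np.sin(np.pi * z / 200.0)    # mode-1 displacement (m)
rho = displaced_density(z, rho_bg, eta)
print(rho[100] - rho_bg[100])            # density anomaly at mid-depth
```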



Author(s): Cristian Challu, Christian Poppeliers, Predrag Punoševac, Artur Dubrawski

ABSTRACT We present a new method to discriminate between earthquakes and buried explosions using observed seismic data. The method differs from previous seismic discrimination algorithms in two main ways. First, we use seismic spatial gradients, as well as the wave attributes estimated from them (referred to as gradiometric attributes), rather than the conventional three-component seismograms recorded on a distributed array. The primary advantage of this is that a gradiometer is only a fraction of a wavelength in aperture, compared with a conventional seismic array or network. Second, we use the gradiometric attributes as input data to a machine learning algorithm. The resulting discrimination algorithm uses the norms of truncated principal components obtained from the gradiometric data to distinguish the two classes of seismic events. Using high-fidelity synthetic data, we show that the data and gradiometric attributes recorded by a single seismic gradiometer perform as well as a conventional distributed array at the event-type discrimination task.
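One plausible reading of "norms of truncated principal components" is: project each event's attribute vector onto the first k principal components and use the norm of that projection as a feature for a classifier. The sketch below implements that reading on synthetic data; the attribute matrix, class distributions, and choice of logistic regression are assumptions, not the authors' setup.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression

def truncated_pc_norms(X, pca, k):
    """Norm of each sample's projection onto the first k principal
    components: a scalar summary of the (gradiometric) attribute vector."""
    scores = pca.transform(X)[:, :k]
    return np.linalg.norm(scores, axis=1, keepdims=True)

# hypothetical attribute matrix: rows = events, columns = gradiometric
# attributes (e.g. slowness, back-azimuth, amplitude terms per window)
rng = np.random.default_rng(2)
X_eq = rng.normal(0.0, 1.0, (200, 12))  # "earthquake" class
X_ex = rng.normal(0.8, 1.2, (200, 12))  # "explosion" class
X = np.vstack([X_eq, X_ex])
y = np.array([0] * 200 + [1] * 200)

pca = PCA().fit(X)
features = truncated_pc_norms(X, pca, k=4)
clf = LogisticRegression().fit(features, y)
print(clf.score(features, y))           # training accuracy on toy data
```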



2021 · Author(s): Moataz Dowaidar

The discovery of a genome-wide correlation with obesity-related genes has revealed new information about the genetics of obesity. Given the low proportion of obesity heritability explained by available SNPs, it's not shocking that these SNPs aren't scientifically effective as methods for assessing who would acquire obesity. The roles of the majority of loci, most of which map to non-coding sequences, will take thorough analysis to determine the responsible gene at each locus, which may not be the closest gene. This mechanistic information, as well as the resulting elucidation of the pathophysiology of obesity, will allow the creation of new therapies, which could be the primary advantage of these genetic discoveries. Fortunately, a lack of mechanistic information hasn't stopped researchers from using SNPs and genetic risk ratings to shed light on how obesity biology interacts with environmental and lifestyle influences. These findings suggest that an unhealthy lifestyle may amplify the genetic risk of obesity, despite the fact that environmental studies of obesity genes may be distorted by inaccuracies in diet and physical activity measurement. More research is required to confirm this theory and to identify the specific dietary components (such as sugar-sweetened beverages) that interact with genetic variants. This study could contribute to personalized obesity prevention and care measures in the future (pending confirmation in clinical trials of genetic-risk-guided interventions). Obesity genetics has offered researchers the opportunity to examine causal interactions between obesity and its various possible complications. However, since the majority of the studies discussed above were conducted on people of European ethnicity, more research is required in minority ethnic groups with a high risk of obesity to understand the role of biology, climate, and relationships among these factors in explaining their increased risk.
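Where the passage mentions genetic risk ratings, the underlying arithmetic is typically a weighted allele count; the sketch below computes such a score for a handful of SNPs. The dosages and effect sizes are invented for illustration and carry no clinical meaning.

```python
import numpy as np

def polygenic_score(dosages, effect_sizes):
    """Weighted allele count: sum over SNPs of risk-allele dosage x effect size."""
    return float(np.asarray(dosages) @ np.asarray(effect_sizes))

# hypothetical: four SNPs, dosages in {0, 1, 2}, per-allele effects on BMI (kg/m^2)
print(polygenic_score([2, 0, 1, 1], [0.10, 0.05, 0.08, 0.02]))  # -> 0.30
```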



2021 · Author(s): Riccardo Storti

A precise & unambiguous mathematical definition of Cyber-Risk is developed, yielding an experimentally validated solution demonstrating 'How to Predict & Measure Cyber-Risk' for any Internet Connected Information System (ICIS) to greater than 98.07% accuracy. Moreover, it is shown that the solution holds for all scales of ICIS, from the Application level to the Enterprise level. In addition, it is shown that Test Effort Estimation (TEE) quantifies Cyber-Confidence, which in turn quantifies Cyber-Risk. Hence, TEE is a Mission Critical Activity (MCA) when formulating Cyber-Risk Management Strategies & may be utilised prior to project commencement, in-flight, or post facto as an assessment &/or auditing tool. The TEE Model Construct developed is a statistics-based methodology whereby the evaluations/decisions made result in the contraction or expansion of the 'z-Score' associated with an infinite population of database records. The primary advantage of this approach is that very little information is required client-side at the engagement stage in order to produce peer-acceptable estimates of the required test effort, & to accurately predict & measure the associated Cyber-Risk. This approach empowers clients & service providers to precisely define whatever level of Cyber-Risk is to be contractually delivered, capable of being absorbed, or prepared to be absorbed by consensus. With the aid of a decision table, estimators are able to articulate & convey to the appropriate authorities various levels of Cyber-Risk commensurate with the available resources. The TEE Model Construct presents an experimentally verified methodology, cognizant of commercial realities, yielding the following key advantages: (i) it requires minimal inputs; (ii) it has a scientific foundation; (iii) it facilitates operational decision-making; (iv) it quantifies Risk Based Testing (RBT); (v) it is simple, robust, flexible, consistent, reusable & transparent; (vi) it is capable of scaling a projected solution from a known solution; (vii) it embraces Continuous Improvement Processes (CIPs); (viii) it confines perceptual subjectivity predominantly to three variables; & (ix) it is commercially available as an off-the-shelf product.
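The "z-Score for an infinite population" mechanics the abstract refers to are, in generic form, the standard sample-size calculation for a proportion: n = z² · p(1 − p) / e². The sketch below shows only that textbook calculation; the actual TEE Model Construct is a commercial product and its inputs are not reproduced here.

```python
from math import ceil
from statistics import NormalDist

def required_tests(confidence, margin, p=0.5):
    """Sample size for an infinite population of records:
    n = z^2 * p * (1 - p) / e^2, with z the two-sided normal quantile."""
    z = NormalDist().inv_cdf(0.5 + confidence / 2.0)  # e.g. 0.95 -> 1.96
    return ceil(z * z * p * (1.0 - p) / (margin * margin))

# e.g. 98.07% confidence with a 2% margin of error, worst-case p = 0.5
print(required_tests(confidence=0.9807, margin=0.02))
```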




