Maximize Investment Rewards: Investigating the Effect of Field Characteristic on the Optimal Subsea Processing Solution

2016 ◽  
Author(s):  
D. Sandy ◽  
Z. Hasan


2021 ◽ 
Vol 11 (15) ◽  
pp. 7169
Author(s):  
Mohamed Allouche ◽  
Tarek Frikha ◽  
Mihai Mitrea ◽  
Gérard Memmi ◽  
Faten Chaabane

To bridge the current gap between the expectations placed on Blockchain technology and its intensive computation constraints, the present paper advances a lightweight processing solution, based on a load-balancing architecture, compatible with lightweight/embedded processing paradigms. In this way, the execution of complex operations is securely delegated to an off-chain general-purpose computing machine, while the core Blockchain operations are kept on-chain. The illustrations correspond to an on-chain Tezos configuration and to a multiprocessor ARM embedded platform (integrated into a Raspberry Pi). Performance is assessed in terms of security, execution time, and CPU consumption when carrying out a visual document fingerprinting task. It is thus demonstrated that the advanced solution makes it possible for a compute-intensive application to be deployed under the severely constrained computation and memory resources of a Raspberry Pi 3. The experimental results show that up to nine Tezos nodes can be deployed on a single Raspberry Pi 3 and that the limitation derives not from memory but from computation resources. The execution time with a limited number of fingerprints is 40% higher than with a classical PC solution (value computed with a 95% relative error lower than 5%).
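The delegation pattern the abstract describes (heavy computation off-chain, only compact verification data on-chain) can be sketched as follows. This is a minimal illustration, not the paper's implementation: the class and function names are hypothetical, and SHA-256 stands in for the paper's visual document fingerprint.

```python
import hashlib

def compute_fingerprint(document: bytes) -> str:
    # Compute-intensive step, delegated to an off-chain general-purpose
    # machine (a stand-in for the paper's visual fingerprinting task).
    return hashlib.sha256(document).hexdigest()

class OnChainLedger:
    """Hypothetical stand-in for the on-chain side: it stores and verifies
    only compact fingerprints, never the raw documents themselves."""

    def __init__(self) -> None:
        self._records: dict[str, str] = {}

    def commit(self, doc_id: str, fingerprint: str) -> None:
        # Only the small fingerprint is committed on-chain.
        self._records[doc_id] = fingerprint

    def verify(self, doc_id: str, document: bytes) -> bool:
        # Re-derive the fingerprint and compare it with the committed one.
        return self._records.get(doc_id) == compute_fingerprint(document)

ledger = OnChainLedger()
doc = b"contract scan, page 1"
ledger.commit("doc-1", compute_fingerprint(doc))  # off-chain work, on-chain commit
print(ledger.verify("doc-1", doc))                # matching document passes
print(ledger.verify("doc-1", b"tampered copy"))   # altered document fails
```

The on-chain state stays small and cheap to check regardless of how expensive the fingerprinting itself is, which is what makes the approach viable on memory- and CPU-constrained hardware.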


2017 ◽  
Vol 5 (14) ◽  
pp. 3568-3578 ◽  
Author(s):  
Dong Gao ◽  
Zhihui Chen ◽  
Jianyao Huang ◽  
Weifeng Zhang ◽  
Congyuan Wei ◽  
...  

The performance of polymer field-effect transistors was enhanced by microstructure engineering through the use of a bi-component solvent.


1968 ◽  
Vol 21 (9) ◽  
pp. 2247 ◽  
Author(s):  
JW Clark-Lewis ◽  
RW Jemison

2'-Hydroxychalcones and α-alkoxy-2'-hydroxychalcones are converted by sodium borohydride in isopropanol into flav-3-enes and 3-alkoxyflav-3-enes in a convenient new synthesis which makes these flavenes readily available. Catalytic reduction of the flavenes gives the corresponding flavans or 3-alkoxyflavans in high yield, and the latter are obtained mainly in the 2,3-cis form. The flavenes immediately give flavylium cations in the cold when treated with acids in air, and oxidation of 5,7,3',4'-tetramethoxyflav-3-ene with benzoquinone in an acidic medium gave the flavylium salt, isolated as the ferrichloride. Reduction of 5,7,3',4'-tetramethoxyflavylium chloride with lithium aluminium hydride gave 5,7,3',4'-tetramethoxyflav-2-ene, identical with the flavene obtained from (-)-epicatechin tetramethyl ether, which confirms an earlier investigation by Gramshaw, Johnson, and King. In its N.M.R. spectrum the heterocyclic-ring protons of this flav-2-ene give an ABX multiplet which is easily distinguished from the ABX multiplet at much lower field characteristic of flav-3-enes.


2021 ◽  
Author(s):  
Huseyin Denli ◽  
Hassan A Chughtai ◽  
Brian Hughes ◽  
Robert Gistri ◽  
Peng Xu

Abstract Deep learning has recently been providing step-change capabilities, particularly through transformer models, for natural language processing applications such as question answering, query-based summarization, and language translation in general-purpose contexts. We have developed a geoscience-specific language processing solution using such models to enable geoscientists to perform rapid, fully quantitative and automated analysis of large corpuses of data and gain insights. One of the key transformer-based models is BERT (Bidirectional Encoder Representations from Transformers). It is trained on a large amount of general-purpose text (e.g., Common Crawl). Using such a model for geoscience applications faces a number of challenges. One is the sparse presence of geoscience-specific vocabulary in general-purpose text (e.g., daily language), and the other is geoscience jargon (domain-specific meanings of words). For example, salt is more likely to be associated with table salt in daily language, but within geoscience it refers to a subsurface entity. To alleviate these challenges, we retrained a pre-trained BERT model on our 20M internal geoscientific records. We refer to the retrained model as GeoBERT. We fine-tuned the GeoBERT model for a number of tasks, including geoscience question answering and query-based summarization. BERT models are very large; BERT-Large, for example, has 340M trained parameters. Geoscience language processing with these models, including GeoBERT, can incur substantial latency when the entire database is processed at every call of the model. To address this challenge, we developed a retriever-reader engine consisting of an embedding-based similarity search as a context-retrieval step, which narrows the context for a given query before it is processed with GeoBERT. We built a solution integrating the context-retrieval and GeoBERT models. Benchmarks show that it is effective in helping geologists identify answers and context for given questions. The prototype will also produce summaries at different granularities for a given set of documents. We have also demonstrated that the domain-specific GeoBERT outperforms general-purpose BERT for geoscience applications.
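The retriever-reader idea in the abstract (rank documents by embedding similarity, then hand only the top matches to the expensive reader model) can be illustrated with a toy retrieval step. This is a sketch under stated assumptions: the bag-of-words "embedding" and all function names here are illustrative stand-ins, not GeoBERT's actual dense-vector retrieval.

```python
import math
from collections import Counter

def embed(text: str) -> Counter:
    # Toy bag-of-words "embedding"; the paper's engine would instead use
    # dense vectors from a neural model. Purely illustrative.
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    # Cosine similarity between two sparse term-count vectors.
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query: str, corpus: list[str], k: int = 1) -> list[str]:
    # Retrieval step: rank all documents by similarity to the query and
    # keep only the top-k, so the costly reader model (GeoBERT in the
    # paper) sees a narrow context instead of the whole database.
    q = embed(query)
    ranked = sorted(corpus, key=lambda d: cosine(q, embed(d)), reverse=True)
    return ranked[:k]

corpus = [
    "salt dome structures trap hydrocarbons in the subsurface",
    "table salt is used widely in cooking",
]
print(retrieve("subsurface salt structures", corpus))
```

Even this crude retriever resolves the abstract's "salt" example: the geological sense outranks the culinary one because more query terms co-occur, and only that narrowed context would be passed to the reader model.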

