Execution strategy: Recently Published Documents

Total documents: 118 (last five years: 27)
H-index: 9 (last five years: 2)

2021
Author(s): Syakira Saadon, Norhazrin Azmi, Prabagar Murukesavan, Norsham Nordin, Salman Saad

Abstract Petroliam Nasional Berhad (PETRONAS) is embarking on the implementation of the Design One Build Many (D1BM) concept, an integrated approach to design standardization, replication and volume consolidation for lightweight, fit-for-purpose wellhead platforms, also known as Lightweight Structures (LWS). The objective of the standardization is to enable monetization of marginal and small fields by improving project economics, which are challenged by high development costs and conventional execution schedules. Traditionally, projects are developed through a "bespoke" design that requires a field-specific engineering study during the Front End Loading (FEL) phase. In addition, once a project has been sanctioned, it must undergo tendering and bidding activities, which can extend the field monetization duration by four to five months. The current "bespoke" approach has resulted in non-standardization, lost opportunities for volume consolidation and, ultimately, longer times to field monetization. Although the Design One Build Many principles have been known for a long time, they were applied in a rather project-oriented manner. This emerging solution therefore synthesizes multiple challenges with the goal of establishing an end-to-end systematic approach to monetizing marginal and small fields by lowering development cost and monetization duration. There will be standardized sets of Base Designs and a flexible Catalogue to cater for standardized add-on items. Incorporating lessons learned into the repeated design, together with a standardized execution strategy covering Engineering, Procurement, Construction, Installation and Commissioning, can also help improve delivery efficiency for the lightweight structure. Greater collaboration across fields and blocks will give a significant added advantage through economies of scale and, eventually, an increase in overall project value.


2021
Vol 13 (19), pp. 4014
Author(s): Lara Fernandez, Joan Adria Ruiz-de-Azua, Anna Calveras, Adriano Camps

Natural disasters and catastrophes are responsible for numerous casualties and important economic losses. They can be monitored with either in-situ or spaceborne instruments. However, these monitoring systems are not optimal for early detection and constant monitoring. They could be optimised with networks of Internet of Things (IoT) sensors on the Earth’s surface capable of automatically triggering on-demand executions of the spaceborne instruments. However, having a vast number of sensors communicating at once with one satellite in view also poses a challenge at the medium access (MAC) layer, since packet collisions lead to packet losses. As part of this study, the monitoring requirements for the ideal spatial node density and measurement update frequency of those sensors are provided. In addition, different MAC protocols are compared to assess the sensor density that can be achieved with each of them using LoRa technology, and the feasibility of the identified monitoring requirements is evaluated.
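As a rough illustration of the kind of MAC-layer capacity question this abstract raises, the sketch below estimates how many sensors a single ALOHA-style channel can support before collisions push the per-packet success probability below a target. The channel model (pure ALOHA), the LoRa airtime, the update period and the 90% target are all assumptions made here for illustration; none of them come from the paper.

```python
import math

def aloha_success_prob(packet_rate_hz: float, airtime_s: float) -> float:
    """Probability that a packet avoids collision on a pure-ALOHA channel.

    A packet collides if any other transmission starts within one airtime
    before or after it, so the vulnerable window is 2 * airtime.
    """
    offered_load = packet_rate_hz * airtime_s          # G, in Erlangs
    return math.exp(-2.0 * offered_load)

def max_nodes(update_period_s: float, airtime_s: float,
              min_success_prob: float = 0.9) -> int:
    """Largest number of sensors whose combined traffic still meets the
    target per-packet success probability (hypothetical requirement)."""
    n = 0
    while True:
        rate = (n + 1) / update_period_s               # aggregate packets/s
        if aloha_success_prob(rate, airtime_s) < min_success_prob:
            return n
        n += 1

if __name__ == "__main__":
    # Illustrative values only: 60 s update period, ~0.06 s LoRa airtime (SF7).
    print(max_nodes(update_period_s=60.0, airtime_s=0.06))
```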


2021
Vol 11 (19), pp. 9271
Author(s): Heiko Engemann, Patrick Cönen, Harshal Dawar, Shengzhi Du, Stephan Kallweit

Wind energy represents the dominant share of renewable energies. The rotor blades of a wind turbine are typically made from composite material, which withstands high forces during rotation. The huge dimensions of the rotor blades complicate the inspection processes in manufacturing. Automating these inspection processes has great potential to increase overall productivity and to create a consistent, reliable database for each individual rotor blade. This paper focuses on automating the rotor blade inspection process using an autonomous mobile manipulator. The main innovations include a novel path planning strategy for zone-based navigation, which enables an intuitive right-hand or left-hand driving behavior in a shared human–robot workspace. In addition, we introduce a new method for surface-orthogonal motion planning on large-scale structures. An overall execution strategy controls the navigation and manipulation processes of the long-running inspection task. The implemented concepts are evaluated in simulation and applied in a real use case involving the tip of a rotor blade form.
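The notion of surface-orthogonal motion planning can be illustrated with a small geometric sketch: given a surface point and its outward normal, build an end-effector pose whose approach axis looks straight at the surface from a fixed standoff. This is only a generic construction under assumptions of my own (a numpy pose matrix, a hypothetical standoff parameter); it is not the planner developed in the paper.

```python
import numpy as np

def surface_orthogonal_pose(point: np.ndarray, normal: np.ndarray,
                            standoff: float = 0.10) -> np.ndarray:
    """Return a 4x4 end-effector pose whose approach (z) axis points along
    the inward surface normal, offset by `standoff` metres from the surface.

    Generic illustration only, not the method described in the paper.
    """
    n = normal / np.linalg.norm(normal)          # outward unit normal
    z_axis = -n                                  # approach the surface head-on
    # Pick any vector not parallel to z to span the tangent plane.
    ref = np.array([1.0, 0.0, 0.0])
    if abs(np.dot(ref, z_axis)) > 0.9:
        ref = np.array([0.0, 1.0, 0.0])
    x_axis = np.cross(ref, z_axis)
    x_axis /= np.linalg.norm(x_axis)
    y_axis = np.cross(z_axis, x_axis)

    pose = np.eye(4)
    pose[:3, 0], pose[:3, 1], pose[:3, 2] = x_axis, y_axis, z_axis
    pose[:3, 3] = point + n * standoff           # hover above the surface
    return pose
```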


Author(s): Masaaki Fukasawa, Masamitsu Ohnishi, Makoto Shimoshimizu

This paper examines a discrete-time optimal execution problem with generalized price impact. Our main objective is to investigate how the price impact caused by the aggregate random trade orders of small traders affects the optimal execution strategy when those orders have a Markovian dependence. The problem is formulated as a Markov decision process whose state variables include the small traders’ last aggregate order. Over a finite horizon, a large trader with a Constant Absolute Risk Aversion (CARA) von Neumann–Morgenstern (vN-M) utility function maximizes the expected utility of final wealth. By applying the backward induction method of dynamic programming, we characterize the optimal execution strategy and the optimal value function, and conclude that the optimal execution strategy is a time-dependent affine function of three state variables. Moreover, numerical analysis reveals that the optimal execution strategy admits a “statistical arbitrage” via round-trip trading, even though our model considers a linear permanent price impact. This result differs from the previously prevailing one, namely that a linear permanent price impact model precludes any price manipulation or arbitrage. Thus, accounting for the price impact caused by small traders’ orders with Markovian dependence is significant.
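The backward induction the abstract refers to has the familiar finite-horizon Bellman structure sketched below. The notation (wealth W_t, remaining inventory Q_t, price S_t, trade size q_t) is illustrative shorthand rather than the authors' formulation, which additionally carries the small traders' aggregate order as a state variable.

```latex
% Schematic backward induction for a finite-horizon execution problem with
% CARA terminal utility U(w) = -exp(-gamma w); notation is illustrative only.
\[
  V_T(W_T, Q_T, S_T) \;=\; -\exp\!\bigl(-\gamma\, W_T\bigr),
\]
\[
  V_t(W_t, Q_t, S_t)
  \;=\; \max_{q_t}\;
  \mathbb{E}\!\left[\, V_{t+1}\bigl(W_{t+1}, Q_{t+1}, S_{t+1}\bigr)
  \,\middle|\, W_t, Q_t, S_t, q_t \right],
  \qquad t = T-1, \dots, 0 .
\]
% Solving this recursion backwards in t is what yields an optimal trade
% q_t^* that is affine in the state variables, as stated in the abstract.
```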


2021
Author(s): Olivier Cousso, Ahmed Bilal, Anas Sikal, Fabien Momot, Matthew Cullen, ...

Abstract A new joint venture operator, established to take over an existing strategic producing field with ongoing drilling operations, took the opportunity to design a new collision avoidance standard based on the latest WPTS (Wellbore Positioning Technical Section) probability-method collision avoidance rules. This has been combined with an innovative execution approach to safely and successfully unlock slots on congested platforms and drill some of the most difficult well trajectories in this complex field from the very first well. The Al Shaheen field, offshore Qatar, is one of the most challenging fields worldwide in terms of collision avoidance. When drilling extended-reach wells from the last remaining and most challenging slots, with top-hole separation as low as three feet centre-to-centre at the conductor pipe shoe, close collaboration with all parties is required to manage collision risk, minimise production loss, and ensure all well objectives are achieved. The execution strategy includes simple jetting and rotating BHA designs for 3D-profile trajectories, remote real-time monitoring including 24/7 survey QA/QC and validation, and mitigation through a decision-making matrix customised for the specific drilling challenges. The platform configuration and challenges in the drilling environment are discussed, together with the theory behind the selected collision avoidance rule and the resulting risk matrix. This is followed by a brief review of why jetting is selected as the only allowable drilling technique in major-risk situations, and by the story of the evolution of Al Shaheen jetting BHAs. Finally, three case studies of top-hole operations describe the practical application of the techniques discussed: the jetting operation from the deepest CP (conductor pipe), the deepest well jetted, and the first 23-in jetting operation carried out by the operator. The combination of risk analysis through genuine probabilistic considerations, jetting operations, and appropriate oversight has been used successfully for more than two years and has allowed over twenty of the remaining, most challenging slots to be saved, ensuring the assets are optimised in the ongoing economically constrained environment. The WPTS has now published its proposed industry-standard probability-based collision-avoidance rule. These case-history examples of applying a similar rule in extreme close-approach drilling will assist other operators considering uptake of the new guidelines, as will the risk matrix developed by the operator. In addition, the jetting technique used as a major mitigation factor is seldom used in the industry today, and the lessons learned in jetting BHA design have already benefited another operator in the region.
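For readers unfamiliar with probability-based anti-collision screening, the toy sketch below computes a simplified separation-factor style ratio between a reference and an offset well at one survey station. Formulations of this ratio vary across operators, and the scaling factor, inputs and example values here are assumptions for illustration only; this is neither the WPTS rule nor the operator's standard described in the paper.

```python
import math

def separation_factor(ctc_distance_m: float,
                      sigma_ref_m: float, sigma_off_m: float,
                      radius_ref_m: float, radius_off_m: float,
                      k: float = 3.5) -> float:
    """Simplified separation-factor style screen at one survey station.

    ctc_distance_m  centre-to-centre distance between the two wellbores
    sigma_*_m       1-sigma positional uncertainties along the C2C direction
    radius_*_m      physical hole/casing radii
    k               uncertainty scaling factor (assumed value)

    Toy illustration of probability-based collision screening in general.
    """
    combined_uncertainty = k * math.sqrt(sigma_ref_m**2 + sigma_off_m**2)
    clearance_needed = combined_uncertainty + radius_ref_m + radius_off_m
    return ctc_distance_m / clearance_needed

# Example with made-up uncertainties: ~3 ft (0.91 m) centre-to-centre at the
# conductor shoe only screens acceptably when uncertainties and radii are small.
print(separation_factor(0.91, 0.05, 0.05, 0.17, 0.17, k=3.5))
```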


2021
Vol 14 (7), pp. 1228-1240
Author(s): Dimitrije Jankov, Binhang Yuan, Shangyu Luo, Chris Jermaine

When numerical and machine learning (ML) computations are expressed relationally, classical query execution strategies (hash-based joins and aggregations) can do a poor job distributing the computation. In this paper, we propose a two-phase execution strategy for numerical computations that are expressed relationally, as aggregated join trees (that is, expressed as a series of relational joins followed by an aggregation). In a pilot run, lineage information is collected; this lineage is used to optimally plan the computation at the level of individual records. Then, the computation is actually executed. We show experimentally that a relational system making use of this two-phase strategy can be an excellent platform for distributed ML computations.
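A minimal sketch of the pilot-then-execute idea, under assumptions of my own: phase one joins only the keys to estimate how many output records each join key contributes (a crude stand-in for record-level lineage), and phase two assigns keys to workers so that output volume is balanced. The function names and the greedy partitioning heuristic are hypothetical and much simpler than the paper's record-level planning.

```python
from collections import Counter, defaultdict

def pilot_lineage(left_keys, right_keys):
    """Phase 1: join only the keys to learn how many output records each
    join key will produce (a crude form of lineage/cardinality info)."""
    left_counts, right_counts = Counter(left_keys), Counter(right_keys)
    return {k: left_counts[k] * right_counts[k]
            for k in left_counts.keys() & right_counts.keys()}

def plan_partitions(lineage, n_workers):
    """Phase 2: assign join keys to workers so the per-worker output volume
    is balanced (greedy longest-processing-time heuristic)."""
    loads = [0] * n_workers
    assignment = defaultdict(list)
    for key, cost in sorted(lineage.items(), key=lambda kv: -kv[1]):
        w = loads.index(min(loads))
        assignment[w].append(key)
        loads[w] += cost
    return assignment, loads

if __name__ == "__main__":
    lineage = pilot_lineage([1, 1, 2, 3, 3, 3], [1, 2, 2, 3])
    print(plan_partitions(lineage, n_workers=2))
```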


2021
Vol 2021, pp. 1-15
Author(s): Yan Wang, Jixin Li, Wansheng Liu, Aiping Tan

Throughput performance is a critical issue in blockchain technology, especially in blockchain sharding systems. Although sharding proposals can improve transaction throughput through parallel processing, each shard is still essentially a small blockchain. Because smart contract transactions are executed serially within a shard, performance has not improved significantly, and there is still room for improvement. A concurrent smart contract execution strategy based on concurrency-degree optimization is proposed to optimize performance within a single shard, and the strategy is applied to each shard. First, the strategy characterizes the feature information of conflicting contracts by executing smart contracts, analyzing the factors that affect their concurrent execution, and clustering the contract transactions. Second, for shards with high transaction frequency, it finds a serializable schedule of contract transactions through redundant computation, taking into account the execution time, conflict rate and available resources of contract transactions; a Variable Shadow Speculative Concurrency Control (SCC-VS) algorithm is proposed for smart contract scheduling. Finally, experimental results show that the strategy increases the concurrency of smart contract execution by 39% on average and the transaction throughput of the whole system by 21% on average.
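The general idea of concurrent execution limited by conflicts can be sketched as follows: transactions whose read/write sets do not overlap are packed into the same parallel batch. This toy scheduler is only an illustration of conflict-aware batching; it is not the SCC-VS algorithm, and the transaction representation is made up for the example.

```python
def conflicts(tx_a, tx_b):
    """Two transactions conflict if one writes a key the other reads or writes."""
    return bool(tx_a["writes"] & (tx_b["reads"] | tx_b["writes"]) or
                tx_b["writes"] & (tx_a["reads"] | tx_a["writes"]))

def schedule_batches(txs):
    """Greedily pack transactions into batches of mutually non-conflicting txs;
    each batch could then be executed in parallel, batches in sequence."""
    batches = []
    for tx in txs:
        for batch in batches:
            if all(not conflicts(tx, other) for other in batch):
                batch.append(tx)
                break
        else:
            batches.append([tx])
    return batches

if __name__ == "__main__":
    txs = [
        {"id": 1, "reads": {"a"}, "writes": {"b"}},
        {"id": 2, "reads": {"c"}, "writes": {"d"}},
        {"id": 3, "reads": {"b"}, "writes": {"a"}},
    ]
    for i, batch in enumerate(schedule_batches(txs)):
        print(i, [tx["id"] for tx in batch])
```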


2021
Vol 22 (1)
Author(s): Daniele Dall’Olio, Nico Curti, Eugenio Fonzi, Claudia Sala, Daniel Remondini, ...

Abstract Background Current high-throughput technologies (whole genome sequencing, RNA-Seq, ChIP-Seq, etc.) generate huge amounts of data, and their usage becomes more widespread with each passing year. Complex analysis pipelines involving several computationally intensive steps have to be applied to an increasing number of samples. Workflow management systems allow parallelization and more efficient usage of computational power. Nevertheless, this mostly happens by assigning the available cores to the pipeline of a single sample, or of a few samples, at a time. We refer to this approach as the naive parallel strategy (NPS). Here, we discuss an alternative approach, which we refer to as the concurrent execution strategy (CES), which distributes the available processors equally across every sample’s pipeline. Results Theoretically, we show that, under loose conditions, the CES results in a substantial speedup, with an ideal gain ranging from 1 to the number of samples. We also observe that the CES yields even faster executions, since parallelizable tasks scale sub-linearly. Practically, we tested both strategies on a whole exome sequencing pipeline applied to three publicly available matched tumour-normal sample pairs of gastrointestinal stromal tumour. The CES achieved speedups in latency of up to 2–2.4 compared to the NPS. Conclusions Our results suggest that if the distribution of resources is further tailored to specific situations, an even greater performance gain could be achieved when executing multi-sample pipelines. For this to be feasible, benchmarking of the tools included in the pipeline would be necessary; in our opinion, such benchmarks should be consistently performed by the tools’ developers. Finally, these results suggest that concurrent strategies might also lead to energy and cost savings by making the usage of low-power machine clusters feasible.
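One way to see the claimed ideal gain between 1 and the number of samples is the simple serial-plus-parallel cost model below; the model and notation are assumptions made here, not taken from the paper.

```latex
% Suppose one sample's pipeline on c cores costs T(c) = s + w/c, with s the
% non-parallelizable part and w the perfectly parallelizable work, and there
% are p cores and n samples in total.
%   NPS: samples run one after another, each with all p cores.
%   CES: all n samples run concurrently, each with p/n cores.
\[
  T_{\mathrm{NPS}} = n\left(s + \frac{w}{p}\right),
  \qquad
  T_{\mathrm{CES}} = s + \frac{nw}{p},
  \qquad
  \text{gain} = \frac{T_{\mathrm{NPS}}}{T_{\mathrm{CES}}}
              = \frac{ns + nw/p}{s + nw/p}.
\]
% If the work is fully parallelizable (s = 0) the gain is 1; if it is fully
% serial (w = 0) the gain is n, matching the range stated in the abstract.
```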

