Optimising Batting Partnership Strategy in the First Innings of a Limited Overs Cricket Match

2021
Author(s):
Patrick Brown

In cricket, the better an individual batsman or batting partnership performs, the more likely the team is to win. Quantifying batting performance is therefore fundamental to informing in-game decisions, optimising team performance and maximising the chances of winning. Several within-game metrics exist to summarise individual batting performances in cricket; however, these metrics summarise individual performance and do not account for partnership performance. An expectation of how likely a batting partnership is to survive each ball within an innings can enable more effective partnership strategies to optimise a team's final total.

The primary objective of this research was to optimise batting partnership strategy by formulating several predictive models to calculate the probability of a batting partnership being dismissed in the first innings of a limited overs cricket match. The narrowed focus also reduced confounding factors, such as match state. More importantly, the results are of practical significance and provide new insight into how an innings evolves. The model structures were expected to reveal strategies for optimally setting a total score for the opposition to chase, since in the first innings of a limited overs cricket match there is little information available at the commencement of, and during, the innings to guide the team in accumulating a winning total.

The secondary objective of this research was to validate the final models to ensure they were appropriately estimating the ball-by-ball survival probabilities of each batsman, in order to determine the most effective partnership combinations. The research hypothesised that the more effective a batting partnership is at occupying the crease, the more runs it will score at an appropriate rate and the more likely the team is to win the match by setting a defendable total.

Data were split into subsets based on batting position or wicket. Cox proportional hazards models and ridge regression techniques were implemented to consider the potential effect of eight batting partnership performance predictor variables on the ball-by-ball probability of a batting partnership facing the next ball without being dismissed. The Area Under the Curve (AUC) was used as a performance measure to rank the batting partnerships.

Based on One-Day International (ODI) games played between 26th December 2013 and 14th February 2016, the model for opening batting partnerships ranked Pakistan's A Ali and S Aslam as the optimal opening batting partnership. This method of calculating batting partnership rankings is also positively correlated with typical measures of success: average runs scored, proportion of team runs scored and winning. These findings support the research hypothesis. South Africa's HM Amla and AB de Villiers are ranked as the optimal partnership at wicket two; as at 28th February 2016, these batsmen were rated 6th equal and 2nd in the world respectively. More importantly, these results show that this pair enables South Africa to maximise its chances of winning by setting a total in an optimal manner. New Zealand captain Kane Williamson is suggested as the optimal batsman to bat in position three regardless of which opener is dismissed. Reviewing New Zealand's loss against Australia on 4th December 2016 indicates that a suboptimal order was used, with JDS Neesham and BJ Watling batting at four and five respectively. Given the circumstances, C Munro and C de Grandhomme were quantified as a more effective order.

The results indicate that for opening batsmen, better team results are obtained when consecutive dot balls are minimised. For top order and middle order batsmen, this criterion is relaxed, with the emphasis on their contribution to the team. Additionally, for middle order batsmen, minimising the occasions where 2 runs or fewer are scored within 4 deliveries is important.

In order to validate the final models, each one was applied to the corresponding Indian Premier League (IPL) 2016 data. These models were used to generate survival probabilities for IPL batting partnerships, which were then plotted against survival probabilities for ODI batting partnerships at the same wicket. The AUC was calculated as a metric to determine which models generated survival probabilities characterising the largest difference between IPL partnerships and ODI partnerships. All models were validated by demonstrating their ability to distinguish the higher survival probabilities of ODI partnerships from those of IPL partnerships at the same wicket.

This research has successfully determined ball-by-ball survival probabilities for individual batsmen and batting partnerships in limited overs cricket games. Additionally, the work has provided a rigorous quantitative framework for optimising team performance.
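
The abstract does not give the exact model specification, but a minimal sketch of the workflow it describes (a ridge-penalised Cox proportional hazards model whose predictions are summarised with an AUC-style ranking metric) might look as follows in Python using lifelines. The column names, toy data, and penalty strength are assumptions for illustration only, not the study's eight predictors or fitted model.

```python
# Hedged sketch: ridge-penalised Cox PH model on ball-by-ball partnership survival,
# scored with a concordance (AUC-like) metric. Toy data; column names are assumed.
import pandas as pd
from lifelines import CoxPHFitter
from lifelines.utils import concordance_index

# One row per partnership: balls survived, whether it ended in a dismissal,
# and hypothetical performance predictors.
df = pd.DataFrame({
    "balls_faced":   [34, 78, 12, 55, 101, 23],
    "dismissed":     [1, 1, 1, 0, 1, 1],          # 0 = not out at innings end (censored)
    "dot_ball_rate": [0.55, 0.38, 0.70, 0.42, 0.35, 0.61],
    "boundary_rate": [0.08, 0.14, 0.02, 0.11, 0.16, 0.05],
    "run_rate":      [4.2, 6.1, 2.8, 5.5, 6.8, 3.6],
})

# penalizer > 0 with l1_ratio = 0 gives a pure ridge (L2) penalty on the coefficients.
cph = CoxPHFitter(penalizer=0.1, l1_ratio=0.0)
cph.fit(df, duration_col="balls_faced", event_col="dismissed")

# Survival curves give the ball-by-ball probability that the partnership
# is still batting after each delivery.
survival = cph.predict_survival_function(df)

# Concordance index (an AUC-like measure for survival models) summarises how well
# the fitted risks order the observed partnership durations.
c_index = concordance_index(df["balls_faced"],
                            -cph.predict_partial_hazard(df),
                            df["dismissed"])
print(cph.summary[["coef", "exp(coef)"]])
print(f"Concordance index: {c_index:.3f}")
```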


Author(s):  
Ryan D McMullan
Rachel Urwin
Peter Gates
Neroli Sunderland
Johanna I Westbrook

Abstract
Background: The operating room (OR) is a complex environment in which distractions, interruptions, and disruptions (DIDs) are frequent. Our aim was to synthesise research on the relationships between DIDs and (a) operative duration, (b) team performance, (c) individual performance, and (d) patient safety outcomes, in order to better understand how interventions can be designed to mitigate the negative effects of DIDs.
Methods: Electronic databases (MEDLINE, Embase, CINAHL, PsycINFO) and reference lists were systematically searched. Included studies were required to report quantitative outcomes of the association between DIDs and team performance, individual performance, and patient safety. Two reviewers independently screened articles for inclusion, assessed study quality, and extracted data. A random effects meta-analysis was performed on a subset of studies reporting total operative time and DIDs.
Results: Twenty-seven studies were identified. The majority were prospective observational studies (n=15) of moderate quality (n=15). DIDs were often defined, measured, and interpreted differently across studies. DIDs were significantly associated with: extended operative duration (n=8), impaired team performance (n=6), self-reported errors by colleagues (n=1), surgical errors (n=1), increased risk and incidence of surgical site infection (n=4), and fewer patient safety checks (n=1). A random effects meta-analysis showed that the proportion of total operative time due to DIDs was 22.0% (95% CI 15.7-29.9).
Conclusion: DIDs in surgery are associated with a range of negative outcomes. However, significant knowledge gaps exist about the mechanisms that underlie these relationships, as well as the potential clinical and non-clinical benefits that DIDs may deliver. Available evidence indicates that interventions to reduce the negative effects of DIDs are warranted, but current evidence is not sufficient to make recommendations about potentially useful interventions.
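
As a rough illustration of the pooling step reported in the results, the sketch below implements a DerSimonian-Laird random-effects meta-analysis of study-level proportions on the logit scale. The study proportions and sample sizes are placeholders, not the values extracted in the review.

```python
# Hedged sketch: random-effects pooling of proportions (share of operative time
# lost to DIDs) via DerSimonian-Laird on the logit scale. Illustrative inputs only.
import numpy as np
from scipy.special import logit, expit
from scipy.stats import norm

# (proportion of operative time attributed to DIDs, effective sample size) per study
studies = [(0.18, 40), (0.25, 32), (0.15, 55), (0.30, 20), (0.22, 48)]
p = np.array([s[0] for s in studies])
n = np.array([s[1] for s in studies])

# Logit transform with the usual large-sample variance approximation.
y = logit(p)
v = 1.0 / (n * p * (1.0 - p))

# Fixed-effect weights and Cochran's Q.
w = 1.0 / v
y_fixed = np.sum(w * y) / np.sum(w)
q = np.sum(w * (y - y_fixed) ** 2)

# DerSimonian-Laird estimate of the between-study variance tau^2.
k = len(studies)
c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
tau2 = max(0.0, (q - (k - 1)) / c)

# Random-effects pooled estimate and 95% CI, back-transformed to a proportion.
w_star = 1.0 / (v + tau2)
y_re = np.sum(w_star * y) / np.sum(w_star)
se_re = np.sqrt(1.0 / np.sum(w_star))
z = norm.ppf(0.975)
pooled, lo, hi = expit(y_re), expit(y_re - z * se_re), expit(y_re + z * se_re)
print(f"Pooled proportion: {pooled:.3f} (95% CI {lo:.3f}-{hi:.3f}), tau^2 = {tau2:.3f}")
```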


1997
Vol 23 (6)
pp. 745-757
Author(s):
Diana L. Deadrick
Nathan Bennett
Craig J. Russell

The selection literature has long debated the theoretical and practical significance of dynamic criteria. Recent research has begun to explore the nature of individual performance over time. This study contributes to that body of research through a hierarchical linear modeling analysis of dynamic criteria. The purpose of the study was to investigate the role of ability in explaining initial job performance, as well as the rate of improvement (or performance trend), among a sample of 408 sewing machine operators over a 24-week period. The results of the hierarchical linear modeling analysis suggest that ability measures are differentially related to initial performance and to the performance improvement trend.
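
A hedged sketch of the kind of two-level growth model the abstract describes (weekly performance nested within operators, with ability predicting both the initial level and the trend) is shown below using statsmodels. The variable names and simulated data are illustrative, not the original dataset or the authors' exact specification.

```python
# Hedged sketch: random-intercept, random-slope growth model in the spirit of the
# study's HLM analysis; the ability:week interaction links ability to the trend.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n_operators, n_weeks = 60, 24

rows = []
for op in range(n_operators):
    ability = rng.normal()                                   # standardised ability score (assumed)
    intercept = 50 + 4 * ability + rng.normal(scale=3)       # initial performance
    slope = 0.8 + 0.3 * ability + rng.normal(scale=0.2)      # weekly improvement trend
    for week in range(n_weeks):
        perf = intercept + slope * week + rng.normal(scale=2)
        rows.append({"operator": op, "week": week, "ability": ability, "performance": perf})
df = pd.DataFrame(rows)

# Random intercept and random slope for week within operator;
# ability * week captures how ability relates to both level and trend.
model = smf.mixedlm("performance ~ ability * week", data=df,
                    groups=df["operator"], re_formula="~week")
result = model.fit()
print(result.summary())
```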


2020
Vol 95 (6)
pp. 181-212
Author(s):
Jonathan C. Glover
Hao Xue

Abstract
Teamwork and team incentives are increasingly prevalent in modern organizations. Performance measures used to evaluate individuals' contributions to teamwork are often non-verifiable. We study a principal-multi-agent model of relational (self-enforcing) contracts in which the optimal contract resembles a bonus pool: it specifies a minimum joint bonus floor that the principal is required to pay out to the agents, and it gives the principal discretion to use non-verifiable performance measures both to increase the size of the pool and to allocate the pool among the agents. The joint bonus floor is useful because of its role in motivating the agents to mutually monitor each other by facilitating a strategic complementarity in their payoffs. In an extension, we introduce a verifiable team performance measure that is a noisy version of the individual non-verifiable measures, and show that the verifiable measure is either ignored or used to create a conditional bonus floor.
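
The contract structure described verbally above can be made concrete with a stylised toy payout rule. The sketch below illustrates a joint bonus floor with a discretionary top-up and allocation based on non-verifiable signals; the numbers and the linear top-up rule are assumptions for illustration, not the paper's derived optimal contract.

```python
# Hedged sketch: toy bonus-pool payout with a guaranteed joint floor plus
# discretionary top-up and allocation driven by non-verifiable signals.
from typing import Dict

def bonus_pool_payout(signals: Dict[str, float], floor: float, top_up_rate: float) -> Dict[str, float]:
    """Pay out at least `floor` in total; signals can enlarge the pool and set shares."""
    total_signal = sum(signals.values())
    pool = floor + top_up_rate * total_signal       # principal may enlarge the pool
    if total_signal == 0:
        # With no discretionary signal, split the guaranteed floor equally.
        return {agent: floor / len(signals) for agent in signals}
    # Allocate the whole pool in proportion to the non-verifiable signals.
    return {agent: pool * s / total_signal for agent, s in signals.items()}

# The floor is paid out even when signals are weak, which is what underpins
# mutual monitoring; strong signals enlarge the pool.
print(bonus_pool_payout({"agent_1": 0.2, "agent_2": 0.1}, floor=100.0, top_up_rate=50.0))
print(bonus_pool_payout({"agent_1": 0.9, "agent_2": 0.8}, floor=100.0, top_up_rate=50.0))
```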


1992
Vol 36 (17)
pp. 1342-1345
Author(s):  
Mary D Zalesny

What if we took seriously the fact that team performance is not synonymous with individual performance? Although teams appear to be the new workhorses of economic and social goal accomplishment, the processes by which they accomplish their goals remain relatively unexplicated and not well understood. In this paper, we argue that coordination is an important unifying construct for defining, measuring, researching, and training effective team performance.


Author(s):  
Srabasti Chatterjee

Purpose: The major focus in contemporary organizational settings has shifted from individual performance to team performance. The current study investigates team performance and its antecedents along both social and cognitive dimensions and provides a qualitative synopsis of each. One such antecedent, transactive memory, spans both of these facets. In the decade and more since the concept emerged, a substantial body of work has related transactive memory to team performance. In an attempt to better understand both of these multi-dimensional constructs and their interrelationships, this paper analyzes the impact of transactive memory on team performance and how to improve it in organizations.
Design/methodology/approach: The paper is purely conceptual and draws on earlier studies to develop its propositions.
Findings: The present study qualitatively analyzes the impact of transactive memory on team performance with respect to both the task-process and relational dimensions of performance. The results show a positive relationship between the three dimensions of transactive memory (credibility, consensus and specialization) and team performance. The study also provides recommendations for improving transactive memory in organizations.
Research limitations/implications: The paper is not empirical, so further empirical analysis could enrich the results.
Originality/value: The paper is original in offering ways to increase transactive memory in organizational settings.


Author(s):  
Dietlind Helene Cymek

Background: In safety-critical and highly automated environments, more than one person typically monitors the system in order to increase reliability. Objective: We investigate whether the anticipated advantage of redundant automation monitoring is lost due to social loafing and whether individual performance feedback can mitigate this effect. Method: In two experiments, participants worked on a multitasking paradigm in which one task was the monitoring and cross-checking of an automation. Participants worked either alone or with a team partner on this task. The redundant group was further subdivided: one subgroup was instructed that only team performance would be evaluated, whereas the other subgroup expected to receive individual performance feedback after the experiment. Results: Compared to participants working alone, those who worked collectively but did not expect individual feedback performed significantly fewer cross-checks and found 25% fewer automation failures. Due to this social loafing effect, even the combined team performance did not surpass the performance of participants working alone. However, when participants expected individual performance feedback, their monitoring behavior and failure detection performance were similar to those of participants working alone, and a team advantage became apparent. Conclusion: Social loafing in redundant automation monitoring can negate the expected gain if individual performance feedback is not provided. Application: These findings may motivate safety experts to evaluate whether their implementations of human redundancy are vulnerable to social loafing effects.
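
A small back-of-the-envelope calculation helps show why loafing can wipe out the nominal redundancy gain. The detection rates below are hypothetical and chosen only to mirror the reported 25% drop in failures found; they are not values from the experiments.

```python
# Hedged sketch: why social loafing can erase the benefit of redundant monitoring.
solo_detection = 0.90                       # single operator's failure-detection rate (assumed)
loafing_factor = 0.75                       # team members find 25% fewer failures each

# Idealised redundancy: two independent monitors, no loafing.
ideal_team = 1 - (1 - solo_detection) ** 2

# Redundancy with loafing: each member's individual detection drops by 25%.
loafed_individual = loafing_factor * solo_detection
loafed_team = 1 - (1 - loafed_individual) ** 2

print(f"Solo monitor:             {solo_detection:.3f}")
print(f"Redundant pair (ideal):   {ideal_team:.3f}")
print(f"Redundant pair (loafing): {loafed_team:.3f}")
# With these assumed numbers the loafing pair (~0.894) no longer beats a single
# diligent monitor (0.900), matching the pattern reported above; this even assumes
# independent misses, and correlated loafing would erode the gain further.
```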


1980
Vol 24 (1)
pp. 536-536
Author(s):
Robert C. Williges
Beverly H. Williges

Many complex, computer-based systems are characterized as requiring successful team rather than individual performance. In systems such as combat information centers, air traffic control centers, and aircrew cockpits, the various individuals must coordinate their performance with other individuals in a relatively rigid task and communication structure in order to complete their mission successfully. Given the widespread existence of requirements for team functioning, it is surprising that the research literature dealing with team performance is so limited.


Scientifica
2016
Vol 2016
pp. 1-5
Author(s):
Dimitrios Papoutsis
Angeliki Antonakou
Chara Tzavara

Objective: To identify the potential effect of ethnic variation on the success of induction of labour in nulliparous women with postdates pregnancies.
Study Design: This was an observational cohort study of women being induced for postdates pregnancies (≥41 weeks) between 2007 and 2013. Women induced for stillbirths and women with multiple pregnancies were excluded. The primary objective was to identify the effect of ethnicity on the caesarean section (CS) delivery rates in this cohort of women.
Results: 1,636 nulliparous women were identified, with a mean age of 27.2 years. 95.8% of the women were of White ethnic origin, 2.6% were Asian, and 1.6% were of Black ethnic origin. The CS delivery rate was 24.4% in the total sample. Women of Black ethnic origin had a 3.26 times greater likelihood of CS in comparison to White women, after adjusting for maternal age, BMI, smoking, presence of meconium, use of epidural analgesia, fetal gender, birth weight, and head circumference (adjusted OR = 3.26; 95% CI: 1.31–8.08, p = 0.011).
Conclusion: We found that nulliparous women of Black ethnicity demonstrate a more than threefold increased risk of caesarean section delivery when induced for a postdates pregnancy.
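
An adjusted odds ratio of this kind is typically obtained from multivariable logistic regression. The sketch below shows one way it could be computed in Python with statsmodels on simulated data with a trimmed covariate list; none of the variables or coefficients come from the study cohort.

```python
# Hedged sketch: adjusted odds ratio for caesarean delivery by ethnicity via
# multivariable logistic regression. Simulated data; illustrative effect sizes only.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 1636
ethnicity = rng.choice(["White", "Asian", "Black"], size=n, p=[0.958, 0.026, 0.016])
maternal_age = rng.normal(27.2, 5.0, size=n)
bmi = rng.normal(26.0, 4.0, size=n)

# Assumed data-generating process: Black ethnicity raises the log-odds of caesarean.
log_odds = -1.3 + 1.1 * (ethnicity == "Black") + 0.02 * (maternal_age - 27) + 0.03 * (bmi - 26)
caesarean = rng.binomial(1, 1 / (1 + np.exp(-log_odds)))

df = pd.DataFrame({"caesarean": caesarean, "ethnicity": ethnicity,
                   "maternal_age": maternal_age, "bmi": bmi})

# Logistic regression adjusting for covariates; White is the reference level.
model = smf.logit("caesarean ~ C(ethnicity, Treatment(reference='White'))"
                  " + maternal_age + bmi", data=df).fit(disp=False)

# Exponentiated coefficients are adjusted odds ratios;
# exponentiated confidence limits give the 95% CIs.
print(np.exp(model.params))
print(np.exp(model.conf_int()))
```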

