Admissible dissimilarity value (ADV) as a measure of subsampling reliability: case study North Sea cod (Gadus morhua)

2020
Vol 192 (12)
Author(s):
Julia Wischnewski
Matthias Bernreuther
Alexander Kempf

Abstract The shape of the length frequency distribution (LFD) is an important input for stock assessments and one of the most important features in studies of fish population dynamics, providing estimates of growth parameters. In practice, oversampling may occur when sampling commercially important species. In times of increasingly limited resources, the length sample size can be optimized at some stages of national or regional sampling programmes without reducing the quality of stock assessments. The main objective of this study is to demonstrate a general, distribution-free methodological approach to sample-size optimization, developed as an alternative to both analytical and bootstrap approaches. A novel framework is proposed to identify a reduced but still informative sample and to quantify the (dis)similarity between reduced and original samples. The identification procedure is based on the concept of a reference subsample: a theoretical minimal representative subsample that, despite its smaller size, still preserves a reasonably precise LFD for a given species. The difference between the original sample and the reference subsample, called the admissible dissimilarity value (ADV), serves as an upper threshold and can be used to quantify the reliability of derived subsamples. Monte Carlo simulations were conducted to validate the approach under various LFD shapes. We illustrate in case studies how the ADV can support the evaluation of adequate sampling effort. The case studies focus on length samples from German commercial vessels fishing for North Sea cod (Gadus morhua).
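The subsampling idea can be sketched numerically: draw random subsamples from an original length sample, compute a dissimilarity between each subsample's LFD and the original LFD, and compare the typical value against an ADV-style threshold. This is a minimal sketch under stated assumptions: the metric (total variation distance), the synthetic cod lengths, the 5 cm length classes, and the function names are illustrative, not the paper's actual definitions.

```python
import numpy as np

rng = np.random.default_rng(42)

def lfd(lengths, bins):
    """Normalised length frequency distribution over fixed length bins."""
    counts, _ = np.histogram(lengths, bins=bins)
    return counts / counts.sum()

def dissimilarity(p, q):
    """Total variation distance between two LFDs (one of many possible metrics)."""
    return 0.5 * np.abs(p - q).sum()

# Synthetic "original sample" of cod lengths (cm); bimodal, purely illustrative.
original = np.concatenate([rng.normal(35, 5, 800), rng.normal(70, 8, 200)])
bins = np.arange(10, 101, 5)  # 5 cm length classes

def subsample_dissimilarity(lengths, n_sub, n_reps=1000):
    """Monte Carlo: median dissimilarity of random size-n_sub subsamples vs the full sample."""
    p_full = lfd(lengths, bins)
    d = [dissimilarity(p_full, lfd(rng.choice(lengths, n_sub, replace=False), bins))
         for _ in range(n_reps)]
    return float(np.median(d))

# A subsample size would be judged acceptable if this value stays below the ADV.
print(subsample_dissimilarity(original, 200))
```

Larger subsamples reproduce the original LFD more closely, so this median dissimilarity shrinks as the subsample size grows; the ADV marks how much shrinkage is enough.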

2006
Vol 63 (6)
pp. 961-968
Author(s):
Joe Horwood
Carl O'Brien
Chris Darby

Abstract Recovery of depleted marine, demersal, commercial fish stocks has proved elusive worldwide. As yet, just a few shared or highly migratory stocks have been restored. Here we review the current status of the depleted North Sea cod (Gadus morhua), the scientific advice to managers, and the recovery measures in place. Monitoring the progress of North Sea cod recovery is now hampered by considerable uncertainties in stock assessments associated with low stock size, variable survey indices, and inaccurate catch data. In addition, questions arise as to whether recovery targets are achievable in a changing natural environment. We show that current targets are achievable with fishing mortality rates that are compatible with international agreements, even if recruitment remains at the current low levels. Furthermore, recent collations of data on international fishing effort have allowed estimation of the cuts in fishing mortality achieved by restrictions on North Sea effort. By the beginning of 2005, these restrictions are estimated to have reduced fishing mortality rates by about 37%. This is insufficient to ensure recovery of North Sea cod within the next decade.
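The qualitative effect of a cut in fishing mortality can be illustrated with a standard spawning-stock-biomass-per-recruit calculation. All parameter values below (F, M, weight-at-age, maturity) are hypothetical placeholders for illustration only, not the assessment values behind the 37% figure in the abstract.

```python
import numpy as np

M = 0.2                      # natural mortality (per year), illustrative
ages = np.arange(1, 11)
weight = 0.05 * ages**1.5    # stylised weight-at-age (kg)
mature = (ages >= 4).astype(float)  # knife-edge maturity at age 4, assumed

def ssb_per_recruit(F):
    """Equilibrium spawning biomass per recruit under a constant F on all ages."""
    Z = F + M                              # total mortality
    survivors = np.exp(-Z * (ages - 1))    # numbers-at-age per recruit
    return float(np.sum(survivors * weight * mature))

F0 = 0.8                     # baseline fishing mortality (illustrative)
F1 = F0 * (1 - 0.37)         # after the estimated 37 % effort-driven reduction
print(ssb_per_recruit(F0), ssb_per_recruit(F1))
```

Even in this toy setting the 37% cut raises spawning biomass per recruit, but whether that gain is enough for recovery depends on recruitment, which is exactly the uncertainty the authors emphasise.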


2013
Vol 70 (6)
pp. 1081-1084
Author(s):
Andreas Sundelöf
Håkan Wennhage
Henrik Svedäng

Abstract Sundelöf, A., Wennhage, H., and Svedäng, H. 2013. A red herring from the Öresund (ICES40G2): the apparent recovery of the Large Fish Indicator (LFI) in the North Sea hides a non-trawled area. – ICES Journal of Marine Science, 70: 1081–1084. As reported in a number of previous papers in this journal, the Large Fish Indicator (LFI) was developed for the North Sea. ICES Statistical Rectangle 40G2 was accidentally included in the North Sea calculations of the LFI for 2004, 2007 and 2008. This inclusion significantly increased the LFI, and the rectangle was subsequently removed from the analysis. We identify and discuss three reasons to revisit rectangle 40G2 when considering the LFI for the North Sea: (i) according to the Marine Strategy Framework Directive (MSFD), the area belongs to the North Sea, (ii) it is a geographically well-defined area where technical regulations have prevented the use of trawls since the 1930s, and (iii) there is evidence of a productive and rather closed cod (Gadus morhua) subpopulation unit in the area, which is an important species for the North Sea LFI.
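For readers unfamiliar with the indicator, the LFI is the proportion, by weight, of surveyed fish longer than a length threshold (40 cm is the threshold commonly used for the North Sea LFI). A minimal sketch, with invented toy data:

```python
import numpy as np

def lfi(lengths_cm, weights_kg, threshold_cm=40.0):
    """Large Fish Indicator: weight fraction of fish longer than the threshold."""
    lengths = np.asarray(lengths_cm, dtype=float)
    weights = np.asarray(weights_kg, dtype=float)
    large = lengths > threshold_cm
    return float(weights[large].sum() / weights.sum())

# Toy survey catch: mostly small fish plus a few large, cod-sized individuals.
lengths = [12, 18, 25, 33, 38, 45, 62, 81]
weights = [0.02, 0.06, 0.15, 0.35, 0.55, 0.9, 2.4, 5.6]
print(lfi(lengths, weights))
```

Because the indicator is weight-based, a handful of large individuals dominates it, which is why including an untrawled rectangle holding large cod can shift the North Sea value noticeably.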


2020
pp. 002188632098271
Author(s):  
Denny Gioia

The Journal of Applied Behavioral Science is in the enviable position of being a go-to journal for many readers seeking usable insights for solving practical problems in managing modern organizations. A perennial source of such knowledge has been case studies, but case studies have been treated as questionable sources of widely applicable knowledge because they have been assumed to be idiosyncratic and to lack adequate “scientific” rigor. In this brief article, I argue for a methodological approach to studying single cases that addresses both of these thorny problems.


Author(s):
Neil Bates
David Lee
Clifford Maier

This paper describes case studies involving crack-detection in-line inspections and fitness-for-service assessments performed on the basis of the inspection data. The assessments were used to evaluate the immediate integrity of the pipeline, based on the reported features, and its long-term integrity, based on excavation data and probabilistic SCC and fatigue crack growth simulations. Two case studies are analyzed, illustrating how data from an ultrasonic crack tool inspection were used to assess threats such as low-frequency electrical resistance weld seam defects and stress corrosion cracking. Specific issues, such as probability of detection/identification and the length/depth accuracy of the tool, were evaluated to determine the tool's suitability for accurately classifying and sizing different types of defects. The long-term assessment is based on the Monte Carlo method [1], in which the material properties, pipeline details, crack growth parameters, and feature dimensions are randomly drawn from specified probability distributions to determine the probability of failure versus time for the pipeline segment. The distributions of unreported crack-related features from the excavation program are used to distribute unreported features along the pipeline. Simulated crack growth by fatigue, SCC, or a combination of the two is performed until failure by either leak or rupture is predicted. The probability-of-failure calculation is performed by running a number of crack growth simulations for each reported and unreported feature and tallying their respective remaining lives. The results of the probabilistic analysis were used to determine the most effective and economical means of remediation by identifying the areas and crack mechanisms that contribute most to the probability of failure.
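The Monte Carlo probability-of-failure idea can be sketched as follows: sample crack depths and growth rates from assumed distributions, grow each crack year by year, and tally the fraction of trials that exceed a failure criterion. Everything here is an illustrative assumption; a simplistic constant annual growth rate stands in for the paper's SCC and fatigue growth laws, and the distributions, wall thickness, and 80 % through-wall criterion are invented.

```python
import numpy as np

rng = np.random.default_rng(7)

def pof_vs_time(n_trials=20000, years=25, wall=7.9):
    """Probability of failure vs time (years) for a population of sampled cracks.

    wall: pipe wall thickness in mm (illustrative).
    """
    # Sampled initial crack depths (mm) and annual growth rates (mm/yr); the
    # lognormal parameters are placeholders, not calibrated values.
    depth = rng.lognormal(mean=np.log(1.0), sigma=0.4, size=n_trials)
    growth = rng.lognormal(mean=np.log(0.08), sigma=0.5, size=n_trials)
    failed = np.zeros(n_trials, dtype=bool)
    pof = []
    for _ in range(years):
        depth = depth + growth           # constant-rate growth, one step per year
        failed |= depth >= 0.8 * wall    # assumed criterion: 80 % through-wall
        pof.append(failed.mean())
    return np.array(pof)

pof = pof_vs_time()
print(pof[-1])  # estimated probability of failure by year 25
```

Because failure is absorbing, the curve is non-decreasing in time; in the paper's fuller version each trial would also sample material properties and apply leak/rupture criteria rather than a single depth threshold.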


2021
Vol 21 (1)
Author(s):
Carole Lunny
Dawid Pieper
Pierre Thabet
Salmaan Kanji

Abstract Background Overviews often identify and synthesise a large number of systematic reviews on the same topic, which is likely to lead to overlap (i.e. duplication) in primary studies across the reviews. Using a primary study result multiple times in the same analysis overstates its sample size and number of events, falsely leading to greater precision in the analysis. This paper aims to: (a) describe the types of overlapping data that arise when the same primary studies are reported across multiple reviews, (b) describe methods to identify and explain overlap of primary study data, and (c) present six case studies illustrating different approaches to managing overlap. Methods We first updated the PubMed search for methods from the MOoR framework relating to overlap of primary studies. One author screened the study titles and abstracts, and any full-text articles retrieved, extracted methods data relating to overlap of primary studies, and mapped them to the overlap methods from the MOoR framework. We also describe six case studies as examples of overviews that use specific overlap methods across the steps in the conduct of an overview. For each case study, we discuss potential methodological implications in terms of limitations, efficiency, usability, and resource use. Results Nine methods studies were found and mapped to the methods identified by the MOoR framework to address overlap. Overlap methods were mapped across four steps in the conduct of an overview: the eligibility criteria step, the data extraction step, the assessment of risk of bias step, and the synthesis step. Our overview case studies used multiple methods to reduce overlap at different steps in the conduct of an overview. Conclusions Our study underlines that there is currently no standard methodological approach to dealing with overlap in primary studies across reviews. The level of complexity when dealing with overlap can vary depending on the yield, trends, and patterns of the included literature and the scope of the overview question. The choice of method may depend on the number of included reviews and their primary studies. Gaps in the evaluation of methods to address overlap were found, and further investigation in this area is needed.
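One widely used way to quantify this kind of overlap is the corrected covered area (CCA), where rows of a citation matrix are distinct primary studies, columns are the included reviews, and CCA = (N − r) / (r·(c − 1)) with N total inclusions, r studies, and c reviews. The matrix below is invented for illustration; this is a sketch of the measure, not of the case studies in the paper.

```python
def corrected_covered_area(citation_matrix):
    """CCA = (N - r) / (r * (c - 1)): 0 means no overlap, 1 means every review
    includes every primary study. Cells are 1 if a review includes a study."""
    rows = len(citation_matrix)                    # r: distinct primary studies
    cols = len(citation_matrix[0])                 # c: reviews in the overview
    n = sum(sum(row) for row in citation_matrix)   # N: total inclusions
    return (n - rows) / (rows * (cols - 1))

# Toy example: three reviews, four primary studies; two studies appear in all three.
matrix = [
    [1, 1, 1],
    [1, 1, 1],
    [1, 0, 0],
    [0, 0, 1],
]
print(corrected_covered_area(matrix))  # N=8, r=4, c=3 → (8-4)/(4*2) = 0.5
```

Reporting a summary like this alongside the citation matrix makes the degree of double-counting risk explicit before any synthesis step.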


2007
Vol 64 (2)
pp. 304-313
Author(s):
Jan-Jaap Poos
Adriaan D Rijnsdorp

A temporarily closed area established to protect spawning Atlantic cod (Gadus morhua) in the North Sea allowed us to study the response of the Dutch beam trawl fleet exploiting common sole (Solea solea) and plaice (Pleuronectes platessa). A number of vessels left the North Sea one month earlier than the normal seasonal pattern. The vessels that continued fishing in the North Sea concentrated in the remaining open areas. In the first week after the closure, the catch rate decreased by 14%, coinciding with a 28% increase in crowding. Area specialisation affected the response of individual vessels: vessels without prior experience in the open areas showed a larger decline in catch rate than vessels that had previously fished in these areas, and were more likely to stop fishing during the closed period. The decrease in catch rate in response to the increase in competitor density allowed us to estimate the strength of the interference competition.
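One standard way to estimate an interference coefficient from such data is to assume a Hassell-Varley-type relationship, CPUE = q · D^(−m), between catch rate and competitor density D, and fit it log-linearly. Whether this is the model the authors fitted is an assumption here, and the data below are synthetic and noise-free.

```python
import numpy as np

def interference_coefficient(density, cpue):
    """Estimate m in CPUE = q * density**(-m) via a log-log linear fit."""
    slope, _ = np.polyfit(np.log(density), np.log(cpue), 1)
    return -slope

density = np.array([2.0, 4.0, 8.0, 16.0])   # competitor density (vessels/area)
cpue = 10.0 * density ** -0.4               # synthetic catch rates with m = 0.4
print(interference_coefficient(density, cpue))
```

An estimate of m near 0 would mean catch rates are insensitive to crowding, while m near 1 would mean catch per vessel falls roughly in proportion to competitor density.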


10.28945/3025
2006
Author(s):
Fernando Jose Barbin Laurindo
Renato de Oliveira Moraes

In today's highly competitive markets, many company initiatives take the form of projects. In particular, Information Technology (IT) projects are of great importance, enabling the dynamic actions that organisations need (Porter, 2001; Tapscott, 2001). However, IT applications assume different roles, from operational support to strategic, according to companies' strategies and operations, as well as the peculiarities of the industry in which they compete (McFarlan, 1984; Porter & Millar, 1985). Depending on this role (appraised by McFarlan's Strategic Grid), ex-ante evaluation practices for selecting IT projects to be implemented can vary (Jiang & Klein, 1999). The objective of this paper is to analyse practices for selecting IT projects in Brazilian companies classified in different quadrants of the Strategic Grid and to observe any differences in ex-ante evaluation practices among them. The adopted methodological approach was qualitative research, more specifically case studies (Claver, Gonzalez & Llopis, 2000; Yin, 1991) performed in four companies.


2003
Vol 270 (1529)
pp. 2125-2132
Author(s):
William F. Hutchinson
Cock van Oosterhout
Stuart I. Rogers
Gary R. Carvalho
