Determination of worst case measurement distances for class 1M and class 2M

Author(s): Enrico Galbiati
2021
Author(s): Bartley Eckhardt, Daniel Fridline, Richard Burke

Ocean towing in general, and non-routine tows in particular, present unique technical challenges to towing vessel owners/operators, salvors, the offshore oil/gas and wind industries, and others. When such tows "go wrong", the harm to human life, property and/or the environment can be significant. The authors have drawn from their work on the Towing Safety Advisory Committee's investigation of the grounding of the MODU Kulluk (TASK 14-01) to present methods and considerations for analyzing ocean towing evolutions, both "routine" and "non-routine". The methods and considerations presented should be employed in advance of a towing evolution, but can also be used in accident reconstruction and forensic analysis when an evolution has failed. The methods are iterative, consider 2 × 6 degrees of freedom of motion (six for the towing vessel(s) and six for the towed vessel) together with the characteristics of the towline, and facilitate determination of:

- Worst Case Conditions.
- Extreme Towline Tension (ETT) as a function of sea state and speed.
- Limits of the Tow (Go/No-Go Criteria).
- Recommended Catenary Length as a function of sea state and speed (a simple catenary sketch follows below).
- Size and Selection of the Towing Vessel and Gear, including:
  - Required Bollard Pull.
  - Required Strength, Characteristics and Condition of the Towline.
  - Limits and Set Points of the Towing Winch, Automatic or Manual.
  - Required Strength and Characteristics of the Synthetic Emergency Towline and its methods of deployment and connection.
  - Working Load Limit (WLL) of the Shackles, Delta Plate and Attachment Points.
  - Required Strength and Characteristics of Bridles, Pendant and Surge Gear/Shock Lines.

The authors further explore the implications of single point failure modes, redundancy in gear and towing vessel(s), high cycle fatigue, and strain monitoring.
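The authors' method is iterative and fully dynamic; none of their code or formulas is reproduced here. As a minimal sketch of the kind of quasi-static catenary input such an analysis starts from, the following computes mid-span sag, paid-out length, and peak tension for a towline idealized as a uniform catenary between attachment points at equal height. The function and parameter names are illustrative assumptions, not the authors'.

```python
import math

def catenary_properties(horizontal_tension_n: float,
                        weight_per_meter_n: float,
                        horizontal_span_m: float) -> dict:
    """Quasi-static catenary between two attachment points at equal height.

    horizontal_tension_n : horizontal component of towline tension (N)
    weight_per_meter_n   : submerged weight of the towline per meter (N/m)
    horizontal_span_m    : horizontal distance between attachment points (m)
    """
    a = horizontal_tension_n / weight_per_meter_n   # catenary parameter (m)
    half_span = horizontal_span_m / 2.0
    sag = a * (math.cosh(half_span / a) - 1.0)      # mid-span sag (m)
    length = 2.0 * a * math.sinh(half_span / a)     # paid-out line length (m)
    # Tension is greatest at the attachment points: T_max = w * (a + sag).
    max_tension = weight_per_meter_n * (a + sag)
    return {"sag_m": sag, "length_m": length, "max_tension_n": max_tension}

# Illustrative numbers only: 500 kN horizontal pull, 300 N/m wire, 600 m span.
print(catenary_properties(500e3, 300.0, 600.0))
```

A static model like this only bounds the geometry; the dynamic surge loads that drive Extreme Towline Tension require the full 2 × 6 DOF treatment the abstract describes.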


Author(s): David F. Thurston

The main objective in optimizing train control is to eliminate the waste associated with classical design, in which train separation is determined through "worst case" assumptions that are invariant to the system. The worst case approach has been in place since the beginning of train control systems. It takes the most conservative approach to the determination of train stopping distance, which is the basis for the design of virtually all train control, and it leads to stopping distances that can be far greater than actually required under the circumstances at the time the train is attempting to brake. Modern train control systems are designed to separate trains so as to provide safety of operation while increasing throughput. Calculations of the minimum distance that separates trains have traditionally been based on the sum of a series of worst case scenarios, the implication being that no train could ever exceed this distance in stopping. This distance is called Safe Braking Distance (SBD). SBD has always been calculated from static parameters that were assumed to be invariant. This is, however, not the case: parameters such as adhesion, acceleration, weight, and reaction time vary with time, location, and velocity. Because the worst case is always used in the calculation, the methodology introduces inefficiencies that degrade capacity and throughput. This is also true when mixed traffic with different stopping characteristics is present at the same time. The classic theory in train control uses an SBD model to describe the characteristics of a stopping train; since the actual conditions are not known, poor conditions are assumed. A new concept in train control uses statistical analysis and estimation to provide knowledge of the conditions. Trains operating along the line use these techniques to estimate the inputs to their SBD calculation. This provides an on-board SBD calculation that is the shortest possible while maintaining the required level of safety. The new SBD is a prime determinant of system capacity; therefore, by optimizing SBD as described, system capacity is also optimized, and the system continuously adjusts to changing conditions.
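The contrast between a fixed worst-case SBD and one computed from estimated, time-varying parameters can be made concrete with a basic kinematic stopping model. The sketch below is a hypothetical illustration only: the function name, the grade correction, and all numeric parameter values are assumptions, not the paper's actual model.

```python
def safe_braking_distance(speed_mps: float,
                          reaction_time_s: float,
                          deceleration_mps2: float,
                          grade_percent: float = 0.0) -> float:
    """Stopping distance: reaction run-out plus kinematic braking distance.

    A simple model in which a downgrade (positive grade_percent) reduces
    the effective deceleration by g * grade / 100.
    """
    g = 9.81
    effective_decel = deceleration_mps2 - g * grade_percent / 100.0
    if effective_decel <= 0:
        raise ValueError("braking cannot overcome the grade")
    reaction_run = speed_mps * reaction_time_s          # distance before brakes act
    braking_run = speed_mps ** 2 / (2.0 * effective_decel)
    return reaction_run + braking_run

# Worst-case design point: guaranteed poor adhesion, slowest assumed reaction.
worst_case = safe_braking_distance(speed_mps=25.0, reaction_time_s=3.0,
                                   deceleration_mps2=0.6)
# Estimated point: on-board statistical estimate of current adhesion/reaction.
estimated = safe_braking_distance(speed_mps=25.0, reaction_time_s=1.5,
                                  deceleration_mps2=1.0)
print(f"worst-case SBD: {worst_case:.0f} m, estimated SBD: {estimated:.0f} m")
```

With these assumed numbers the worst-case distance is roughly 600 m against roughly 350 m for the estimated case, which is the kind of gap the abstract argues statistical estimation can safely reclaim as throughput.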


2017, Vol. 54 (3), pp. 1205-1210
Author(s): Andreas Knoblach, Gertjan Looye

1987, Vol. 41 (8), pp. 1324-1329
Author(s): Charles K. Mann, Thomas J. Vickers, James D. Womack

The problems encountered in applying Raman spectroscopy to direct qualitative and quantitative analysis of minor impurities in nominally pure, colorless solids have been examined. Samples of sulfamethoxazole spiked with 0.5 to 5% of sulfanilamide and sulfanilic acid were used as test materials. A procedure is described that permits detection of spectral features of the specified impurities at the 0.5% level. Least-squares fitting and cross-correlation data treatment procedures for the determination of sulfanilamide in sulfamethoxazole are described, with limits of detection of about 0.1% for either approach. Computer simulations have been used to examine detection of impurity peaks under a variety of conditions, including the worst-case scenario in which the impurity features coincide with the strongest features of the spectrum of the host material. A least-squares fitting approach is described that permits detection of the impurity peak at the 0.5% level even under these worst-case conditions.
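As an illustration of a least-squares treatment of the kind described (not the authors' code), the following sketch models a measured spectrum as a linear combination of host and impurity reference spectra plus a constant baseline. The returned fraction is only a relative fit coefficient; real quantitation would require calibrated response factors for each component.

```python
import numpy as np

def impurity_fraction(measured: np.ndarray,
                      host_ref: np.ndarray,
                      impurity_ref: np.ndarray) -> float:
    """Least-squares estimate of the impurity contribution to a spectrum.

    Models measured = c_host * host_ref + c_imp * impurity_ref + baseline
    and returns c_imp / (c_host + c_imp).
    """
    design = np.column_stack([host_ref, impurity_ref, np.ones_like(measured)])
    coeffs, *_ = np.linalg.lstsq(design, measured, rcond=None)
    c_host, c_imp, _baseline = coeffs
    return c_imp / (c_host + c_imp)

# Synthetic example: overlapping Gaussian bands approximate the worst case
# where impurity features sit on the strongest host feature.
x = np.linspace(0.0, 100.0, 500)
host = np.exp(-(x - 40.0) ** 2 / 8.0)
imp = np.exp(-(x - 42.0) ** 2 / 8.0)
rng = np.random.default_rng(0)
spectrum = 0.99 * host + 0.01 * imp + rng.normal(0.0, 1e-3, x.size)
print(impurity_fraction(spectrum, host, imp))  # close to the spiked 0.01
```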

