Innovative Technologies for Controlled Fragmentation Warheads

2013 ◽  
Vol 80 (3) ◽  
Author(s):  
Domenico Villano ◽  
Francesco Galliccia

The purpose of this paper is to verify the applicability of innovative technologies for manufacturing controlled fragmentation warheads, with particular attention paid to guided ammunition. The authors conducted several studies during warhead development for the DART and Vulcano families of munitions. The lethality of guided munitions can be increased considerably with controlled fragmentation warheads, and this increase can compensate for the lower payload of guided munitions. After introducing the warhead concept and its natural fragmentation, the paper describes both the elements of fracture mechanics related to fragmentation and the state of the art of controlled fragmentation. A preliminary evaluation of controlled fragmentation technologies is illustrated, along with the numerical models developed for predicting natural and controlled fragmentation. The most promising technologies are presented in detail, and the features of the warheads used for the experiments are defined. A description of the entire experimental phase is provided, including the results of arena tests, data analysis, and revision of the numerical models. The applicability of some innovative technologies for controlled fragmentation warheads is fully demonstrated. Two technologies in particular, laser microdrilling and the double-casing solution, provide a substantial increase in the lethality of the reference warhead.

2017 ◽  
Vol 8 (3) ◽  
pp. 101-112 ◽  
Author(s):  
J Swain ◽  
P A Umesh ◽  
A S N Murty

The Indian Space Research Organization launched Oceansat-2 on 23 September 2009; the scatterometer on board was a space-borne sensor capable of providing ocean surface winds (both speed and direction) over the globe for a mission life of 5 years. Observations of ocean surface winds from such a space-borne sensor are a potential source of data covering the global oceans and are useful for driving state-of-the-art numerical models to simulate ocean state when assimilated or blended with weather prediction model products. In this study, an efficient interpolation technique over inverse distance and time is demonstrated using Oceansat-2 wind measurements alone for June 2010 to generate gridded outputs. As the data are available only along the satellite tracks, with additional gaps for various other reasons, the Oceansat-2 winds were subjected to spatio-temporal interpolation, and 6-hour wind fields for the global oceans were generated on a 1 × 1 degree grid. Such interpolated wind fields can be used to drive state-of-the-art numerical models to predict or hindcast ocean state, so as to test the utility and performance of satellite measurements alone in the absence of blended fields. The technique can be applied to other satellites that provide both wind speed and direction data. However, the accuracy of the input winds is expected to have a perceptible influence on the predicted ocean-state parameters. Some comparisons of the interpolated Oceansat-2 winds with available buoy measurements are also made, and the two are found to be in reasonably good agreement, with a correlation coefficient of R > 0.8 and mean deviations of 1.04 m/s and 25° for wind speed and direction, respectively.
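The inverse distance-and-time weighting described above can be sketched as follows. This is a minimal illustration of the general technique, not the study's implementation: the search radii, the distance exponent, and the way space and time separations are combined are all illustrative assumptions.

```python
import numpy as np

def idw_time_interpolate(obs_lat, obs_lon, obs_t, obs_u,
                         grid_lat, grid_lon, grid_t,
                         p=2, max_dist_deg=5.0, max_dt_hours=12.0):
    """Inverse distance-and-time weighted estimate at one grid node.

    obs_*  : 1-D arrays of observation latitude, longitude (deg),
             time (hours), and a wind component (m/s).
    grid_* : target node coordinates and analysis time.
    Observations beyond max_dist_deg or max_dt_hours are ignored,
    leaving NaN where the track data have a gap.
    """
    d_space = np.hypot(obs_lat - grid_lat, obs_lon - grid_lon)
    d_time = np.abs(obs_t - grid_t)
    mask = (d_space < max_dist_deg) & (d_time < max_dt_hours)
    if not mask.any():
        return np.nan  # data gap: no observations within the search radius
    # Combine space and time separations into one normalized distance
    d = np.sqrt((d_space[mask] / max_dist_deg) ** 2
                + (d_time[mask] / max_dt_hours) ** 2)
    w = 1.0 / np.maximum(d, 1e-6) ** p
    return float(np.sum(w * obs_u[mask]) / np.sum(w))
```

Looping this estimator over every node of a 1 × 1 degree grid at 6-hour analysis times yields gridded fields of the kind described, with each wind component (or speed and direction via components) interpolated separately.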


KWALON ◽  
2016 ◽  
Vol 21 (1) ◽  
Author(s):  
Susanne Friese

Summary The aim of this paper is to provide an overview of the ‘state of the art’ of QDA, or CAQDAS, software. The author uses Kahneman’s ideas about slow and fast thinking as a framework. Slow thinking in the context of CAQDAS corresponds to researcher-driven analysis, and fast thinking to tool- and data-driven analysis. The paper is divided into two parts: in the first, the author describes trends and new developments; in the second, she offers a critical appraisal.


GigaScience ◽  
2020 ◽  
Vol 9 (11) ◽  
Author(s):  
Milton Silva ◽  
Diogo Pratas ◽  
Armando J Pinho

Abstract Background The increasing production of genomic data has led to an intensified need for models that can cope efficiently with the lossless compression of DNA sequences. Important applications include long-term storage and compression-based data analysis. In the literature, only a few recent articles propose the use of neural networks for DNA sequence compression. However, they fall short when compared with specific DNA compression tools, such as GeCo2. This limitation is due to the absence of models specifically designed for DNA sequences. In this work, we combine the power of neural networks with specific DNA models. For this purpose, we created GeCo3, a new genomic sequence compressor that uses neural networks for mixing multiple context and substitution-tolerant context models. Findings We benchmark GeCo3 as a reference-free DNA compressor on 5 datasets: a balanced and comprehensive dataset of DNA sequences, the Y-chromosome and human mitogenome, 2 compilations of archaeal and virus genomes, 4 whole genomes, and 2 collections of FASTQ data of a human virome and ancient DNA. GeCo3 achieves a solid improvement in compression over the previous version (GeCo2) of 2.4%, 7.1%, 6.1%, 5.8%, and 6.0%, respectively. To test its performance as a reference-based DNA compressor, we benchmark GeCo3 on 4 datasets consisting of the pairwise compression of the chromosomes of the genomes of several primates. GeCo3 improves the compression by 12.4%, 11.7%, 10.8%, and 10.1% over the state of the art. The cost of this compression improvement is some additional computational time (1.7–3 times slower than GeCo2). The RAM use is constant, and the tool scales efficiently, independently of the sequence size. Overall, these values outperform the state of the art. Conclusions GeCo3 is a genomic sequence compressor with a neural network mixing approach that provides additional gains over top specific genomic compressors. The proposed mixing method is portable, requiring only the probabilities of the models as inputs, which allows easy adaptation to other data compressors or compression-based data analysis tools. GeCo3 is released under GPLv3 and is available for free download at https://github.com/cobilab/geco3.
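The mixing idea can be illustrated with a simplified convex mixture whose softmax weights are adapted online by gradient descent on the coding loss. This is a stand-in sketch of expert mixing in general, not GeCo3's actual network, which mixes many context and substitution-tolerant context models with a neural net.

```python
import numpy as np

def mix_probabilities(model_probs, weights):
    """Convex mixture of per-model next-symbol distributions.

    model_probs : (n_models, alphabet_size) array, each row sums to 1.
    weights     : (n_models,) unnormalized mixing scores (softmaxed here).
    """
    w = np.exp(weights - weights.max())
    w /= w.sum()
    return w @ model_probs

def update_weights(weights, model_probs, symbol, lr=0.5):
    """One online gradient step on the code length -log2 p_mix[symbol].

    Models that assigned the observed symbol a higher probability than
    the current mixture gain weight; the others lose weight.
    """
    p_mix = mix_probabilities(model_probs, weights)
    w = np.exp(weights - weights.max())
    w /= w.sum()
    # Gradient of -log p_mix[symbol] w.r.t. the softmax scores
    grad = -w * (model_probs[:, symbol] - p_mix[symbol]) / p_mix[symbol]
    return weights - lr * grad
```

Used inside an arithmetic coder, `mix_probabilities` supplies the coding distribution for each symbol and `update_weights` runs after the symbol is known, so compressor and decompressor stay synchronized without transmitting the weights.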


2019 ◽  
Author(s):  
Hermano Lustosa ◽  
Fábio Porto ◽  
Patrick Valduriez

Limitations in current DBMSs prevent their wide adoption in scientific applications. In order to let scientific applications benefit from DBMS support, enabling declarative data analysis and visualization over scientific data, we present an in-memory array DBMS called SAVIME. In this work, we describe SAVIME along with its data model. Our preliminary evaluation shows how SAVIME, by using a simple storage definition language (SDL), can outperform the state-of-the-art array database system SciDB during data ingestion. We also show that it is possible to use SAVIME as a storage alternative for a numerical solver without affecting its scalability.
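The ingestion advantage of a storage definition language comes from declaring array shape and type over data already laid out in solver-native binary form, rather than parsing and reloading it. The following is a hypothetical sketch of that idea using a NumPy memory map; the file name and shapes are illustrative, and this is not SAVIME's actual SDL or API.

```python
import os
import tempfile
import numpy as np

# A solver dumps a dense 3-D field to disk as raw binary (row-major float64).
path = os.path.join(tempfile.mkdtemp(), "field.bin")
field = np.arange(4 * 4 * 2, dtype=np.float64).reshape(4, 4, 2)
field.tofile(path)

# "Ingestion" amounts to declaring dtype and shape over the existing file,
# analogous to an SDL definition: no parsing, conversion, or copying.
view = np.memmap(path, dtype=np.float64, mode="r", shape=(4, 4, 2))

# Declarative-style subsetting over the mapped array.
subset = view[1:3, :, 0]
```

The design choice this illustrates is zero-conversion loading: because the storage definition matches the solver's own memory layout, queries can start immediately over data the solver just wrote.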


Metabolites ◽  
2012 ◽  
Vol 2 (4) ◽  
pp. 733-755 ◽  
Author(s):  
Anne-Christin Hauschild ◽  
Till Schneider ◽  
Josch Pauling ◽  
Kathrin Rupp ◽  
Mi Jang ◽  
...  

2002 ◽  
Vol 12 ◽  
pp. 729-730 ◽
Author(s):  
Detlef Elstner

Abstract The state of the art for dynamo models in spiral galaxies is reviewed. The comparison of numerical models with particular properties of observed magnetic fields yields constraints on the turbulent diffusivity and the α-effect. Deriving the turbulence parameters from the vertical structure of the interstellar medium gives quite reasonable values for modelling the regular magnetic fields in galaxies with an α²Ω-dynamo. Considering the differences in turbulence between spiral arms and interarm regions, the observed interarm magnetic fields are recovered in the numerical models owing to the special properties of the α²Ω-dynamo.
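For context, the regular field in such models obeys the standard mean-field induction equation, in which the α-effect and the turbulent diffusivity η_t are exactly the parameters the abstract says are constrained (this is textbook mean-field dynamo theory, not an equation quoted from the paper):

```latex
\frac{\partial \bar{\mathbf{B}}}{\partial t}
  = \nabla \times \left( \bar{\mathbf{v}} \times \bar{\mathbf{B}}
  + \alpha \bar{\mathbf{B}}
  - \eta_t \, \nabla \times \bar{\mathbf{B}} \right)
```

An α²Ω-dynamo is the regime in which both the α-term and the shear from differential rotation (the Ω-effect, contained in the \(\bar{\mathbf{v}} \times \bar{\mathbf{B}}\) term) contribute comparably to field generation.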

