The application of geophysics during evaluation of the Century zinc deposit

Geophysics ◽  
2000 ◽  
Vol 65 (6) ◽  
pp. 1946-1960 ◽  
Author(s):  
Andrew J. Mutton

Between 1990 and 1995, experimental programs using high-resolution geophysics were undertaken at several Australian operating mines and advanced evaluation projects. The primary aim of these programs was to investigate how geophysical technology could improve the precision and economics of ore evaluation and extraction. Geophysical methods used for this purpose included: 1) borehole geophysical logging to characterize ore and rock properties more accurately, for improved correlation between drill holes, quantification of resource quality, and geotechnical information; 2) imaging techniques between drill holes to map structure directly or to locate geotechnical problems ahead of mining; and 3) high-resolution surface methods to map ore contacts and variations in ore quality, or for geotechnical requirements. In particular, the use of geophysics during evaluation of the Century zinc deposit in northern Australia demonstrated the potential value of these methods for defining the lateral and vertical extent of ore, quantitative density determination, prediction of structure between drill holes, and geotechnical characterization of the deposit. An analysis of the potential benefit of combining borehole geophysical logging and imaging suggested that a more precise structural evaluation of the deposit could be achieved at a cost several million dollars below that of the conventional evaluation approach based on diamond drill-hole logging and interpolation alone. The Century evaluation also lent support to the possibility of using systematic geophysical logging of blast holes as an integral part of the ore extraction process. Preliminary tests indicate that ore boundaries can be determined to a resolution of several centimeters and that ore grade can be estimated directly to a usable accuracy. Applying this approach routinely to production blast holes could yield benefits of millions of dollars annually through more timely and accurate ore boundary and quality data, decreased dilution, and improved mill performance. Although the indications of substantial benefits from the appropriate and timely use of geophysics at Rio Tinto's mining operations are positive, some challenges remain. These relate largely to integrating the technology with the mining process and to gaining acceptance by mine operators of the economic value of such work. Until the benefits are demonstrated clearly over time, the use of geophysics as a routine component of evaluation and mining is likely to remain at a low level.
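
To make the blast-hole idea concrete, here is a minimal sketch of picking an ore/waste contact from a density log by threshold crossing with linear interpolation, which is how centimeter-scale boundary resolution from coarser sampling is plausible. The log values, sample interval, and the 2.9 g/cc cutoff are illustrative assumptions, not Century parameters.

```python
# Hypothetical sketch: pick an ore/waste contact from a blast-hole
# density log. Cutoff, sampling, and grades are invented values.
import numpy as np

def pick_contact(depths, density, cutoff=2.9):
    """Return interpolated depths where the log crosses the cutoff."""
    above = density >= cutoff
    crossings = np.where(above[1:] != above[:-1])[0]
    picks = []
    for i in crossings:
        # linear interpolation between adjacent samples gives
        # sub-sample (centimeter-scale) precision on the contact
        frac = (cutoff - density[i]) / (density[i + 1] - density[i])
        picks.append(depths[i] + frac * (depths[i + 1] - depths[i]))
    return picks

# synthetic log: waste (2.7 g/cc) over ore (3.1 g/cc), 5 cm sampling
depths = np.arange(0.0, 20.0, 0.05)
density = np.where(depths < 12.3, 2.7, 3.1) \
          + np.random.normal(0, 0.02, depths.size)
print(pick_contact(depths, density))   # one pick near 12.3 m
```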

2020 ◽  
Author(s):  
Rebecca Bell

The discovery of slow slip events (SSEs) at subduction margins in the last two decades has changed our understanding of how stress is released at subduction zones. Fault slip is now viewed as a continuum of slip modes between regular earthquakes and aseismic creep, and an appreciation of seismic hazard can only be realised by understanding the full spectrum of slip. SSEs may have the potential to trigger destructive earthquakes and tsunami on nearby faults, but whether this is possible, and why SSEs occur at all, are two of the most important questions in earthquake seismology today. Laboratory and numerical models suggest that slow slip can be generated spontaneously under conditions of very low effective stress, facilitated by high pore-fluid pressure, but it has also been suggested that variations in frictional behaviour, potentially caused by very heterogeneous fault-zone lithology, may be required to promote slow slip.

Testing these hypotheses is difficult because it requires resolving rock properties at high resolution many kilometres below the seabed, sometimes in kilometres of water, where drilling is technically challenging and expensive. Traditional geophysical methods like travel-time tomography cannot provide velocity models of fine enough scale to probe the rock properties of fault zones specifically. In the last decade, however, computational power has improved to the point where 3D full-waveform inversion (FWI) methods make it possible to use the full wavefield, rather than just travel times, to produce seismic velocity models with a resolution an order of magnitude better than conventional models. Although the hydrocarbon industry has demonstrated many successful examples of 3D FWI, the method requires extremely high-density arrays of instruments, very different from the 2D transect style of data collection still commonly employed at subduction zones.

The north Hikurangi subduction zone, New Zealand, is special in that it hosts the world's best-characterised shallow SSEs (<2 km to 15 km below the seabed). This makes it an ideal location to collect 3D data optimised for FWI to resolve rock properties in the slow slip zone. In 2017-2018, an unprecedentedly large 3D experiment, including 3D multi-channel seismic reflection, 99 ocean-bottom seismometers, and 194 onshore seismometers, was conducted along the north Hikurangi margin in a 100 km × 15 km area with an average 2 km instrument spacing. In addition, IODP Expeditions 372 and 375 collected logging-while-drilling and core data and deployed two borehole observatories to target slow slip in the same area. In this presentation I will introduce this world-class 3D dataset and preliminary results, which will enable high-resolution 3D models of physical properties to be made to bring slow slip processes into focus.
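
The travel-time versus full-waveform distinction can be illustrated with a toy 1D example, far simpler than the 3D adjoint-state inversion the abstract describes: fit a velocity by minimizing the least-squares misfit between observed and modelled waveforms. All numbers below are invented for illustration.

```python
# Toy 1D illustration of the full-waveform idea (not the Hikurangi FWI
# workflow): recover a velocity by minimizing a waveform misfit.
import numpy as np

def ricker(t, f0=10.0):
    """Ricker wavelet, a common synthetic seismic source."""
    a = (np.pi * f0 * t) ** 2
    return (1 - 2 * a) * np.exp(-a)

t = np.arange(0.0, 2.0, 0.002)   # 2 s trace, 2 ms sampling
offset = 3000.0                  # source-receiver distance (m)

def model_trace(v):
    # forward model: a direct arrival delayed by offset / v
    return ricker(t - offset / v)

d_obs = model_trace(2500.0)      # "observed" data; true v = 2500 m/s

def misfit(v):
    r = model_trace(v) - d_obs   # uses every sample of the wavefield,
    return 0.5 * np.sum(r * r)   # not just the picked arrival time

# Real FWI descends this misfit with adjoint-state gradients in 3D and
# must start close enough to avoid cycle skipping; in 1D a grid search
# over candidate velocities suffices to show the principle.
vs = np.arange(2000.0, 3000.0, 5.0)
print(vs[int(np.argmin([misfit(v) for v in vs]))])   # 2500.0
```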


Author(s):  
James Pawley ◽  
David Joy

The scanning electron microscope (SEM) builds up an image by sampling contiguous sub-volumes near the surface of the specimen. A fine electron beam selectively excites each sub-volume, and the intensity of some resulting signal is measured and plotted as a corresponding intensity in the image. The spatial resolution of such an image is limited by at least three factors. Two of these determine the size of the interaction volume: the size of the electron probe and the extent to which detectable signal is excited from locations remote from the beam impact area. The third arises from the fact that the probing beam is composed of a number of discrete particles, so the accuracy with which any detectable signal can be measured is limited by Poisson statistics applied to this number (or to the number of events actually detected, if this is smaller). As in all imaging techniques, the limiting signal contrast required to recognize a morphological structure is constrained by this statistical consideration. The only way to overcome this limit is to increase either the contrast of the measured signal or the number of beam/specimen interactions detected. Unfortunately, these interactions deposit ionizing radiation that may damage the very structure under investigation. As a result, any practical consideration of the high-resolution performance of the SEM must address not only the size of the interaction volume but also the contrast available from the signal producing the image and the radiation sensitivity of the specimen.
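
The statistical limit described above can be made quantitative with the standard Rose criterion, which holds that a feature is recognizable when its signal exceeds the noise by a factor of about k = 5. For Poisson noise the mean number of detected events n per pixel needed to see a fractional contrast C then satisfies C·n > k·√n, i.e. n > (k/C)². The numbers below are a worked example, not figures from the paper.

```python
# Worked example of the Poisson/contrast limit (Rose criterion, k ~ 5).
k = 5.0                      # visibility factor, a standard assumption
for C in (0.5, 0.1, 0.01):   # 50%, 10%, 1% signal contrast
    n_min = (k / C) ** 2     # minimum detected events per pixel
    print(f"contrast {C:4.2f}: need > {n_min:,.0f} events/pixel")
# contrast 0.50: need > 100 events/pixel
# contrast 0.10: need > 2,500 events/pixel
# contrast 0.01: need > 250,000 events/pixel
```

The steep 1/C² scaling is exactly why low-contrast signals force either a higher dose (and thus more radiation damage) or a contrast-enhancing detection strategy.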


Author(s):  
C. Barry Carter

This paper reviews the current state of understanding of interface structure and highlights some of the future needs and problems which must be overcome. The subject can be separated into three topics: 1) the fundamental electron microscopy aspects, 2) material-specific features of the study, and 3) the characteristics of the particular interfaces. The two topics relevant to most studies are the choice of imaging techniques and sample preparation. The techniques used to study interfaces in the TEM include high-resolution imaging, conventional diffraction-contrast imaging, and phase-contrast imaging (Fresnel-fringe images, diffuse scattering). The material studied affects not only the characteristics of the interfaces (through changes in bonding, etc.) but also the method used for sample preparation, which may in turn have a significant effect on the resulting image. Finally, the actual nature and geometry of the interface must be considered. For example, it has become increasingly clear that the plane of the interface is particularly important whenever at least one of the adjoining grains is crystalline.

A particularly productive approach to the study of interfaces is to combine different imaging techniques, as illustrated in the study of grain boundaries in alumina. In this case, the conventional imaging approach showed that most grain boundaries in ion-thinned samples are grooved at the grain boundary, although the extent of this grooving clearly depends on the crystallography of the surface. The use of diffuse scattering (from amorphous regions) gives invaluable information here, since it can be used to confirm directly that surface grooving does occur and that the grooves can fill with amorphous material during sample preparation (see Fig. 1). Extensive use of image simulation has shown that, although information concerning the interface can be obtained from Fresnel-fringe images, the introduction of artifacts through sample preparation cannot be lightly ignored. The Fresnel-fringe simulation has been carried out using a commercial multislice program (TEMPAS) which was intended for simulation of high-resolution images.
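
The physics behind Fresnel-fringe images can be sketched far more simply than a full multislice run such as TEMPAS: an exit wave with a phase step at the interface is propagated to a defocused plane, where fringes appear flanking the boundary. The wavelength, defocus, and phase step below are illustrative assumptions only.

```python
# Minimal 1D Fresnel-fringe sketch (not TEMPAS): propagate a phase-step
# exit wave by the angular-spectrum method and look at the intensity.
import numpy as np

lam = 2.51e-3    # electron wavelength at ~200 kV, nm (assumed)
dz = 500.0       # defocus / propagation distance, nm (assumed)
n, dx = 2048, 0.05
x = (np.arange(n) - n // 2) * dx

# exit wave: material on one side of the boundary imparts a small
# extra phase shift relative to the other side
psi = np.exp(1j * 0.3 * (x > 0))

# Fresnel propagator in reciprocal space: exp(-i*pi*lam*dz*k^2)
kx = np.fft.fftfreq(n, d=dx)
psi_z = np.fft.ifft(np.fft.fft(psi) * np.exp(-1j * np.pi * lam * dz * kx ** 2))

intensity = np.abs(psi_z) ** 2   # defocused image: fringes flank the step
print(intensity[n // 2 - 5 : n // 2 + 5].round(3))
```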


1992 ◽  
Vol 23 (4) ◽  
pp. 245-256 ◽  
Author(s):  
Å. Spångberg ◽  
J. Niemczynowicz

The paper describes a measurement project aimed at delivering water quality data with the very fine time resolution necessary to discover deterministic elements of the complex process of pollution wash-off from an urban surface. Measurements of rainfall, runoff, turbidity, pH, conductivity, and temperature with 10 s time resolution were performed on a simple urban catchment, i.e., a single impermeable 270 m² surface drained by one inlet. The paper presents the data collection and some preliminary results.


2021 ◽  
Vol 3 (Supplement_1) ◽  
pp. i1-i1
Author(s):  
Gilbert Hangel ◽  
Cornelius Cadrien ◽  
Philipp Lazen ◽  
Sukrit Sharma ◽  
Julia Furtner ◽  
...  

Abstract OBJECTIVES Neurosurgical resection in gliomas depends on precise preoperative definition of the tumor and its margins to realize a safe maximal resection that translates into a better patient outcome. New metabolic imaging techniques could improve this delineation as well as designate targets for biopsies. We validated the performance of our fast high-resolution whole-brain 3D magnetic resonance spectroscopic imaging (MRSI) method at 7T in high-grade gliomas (HGGs) as a first step in this regard. METHODS We measured 23 patients with HGGs at 7T with MRSI covering the whole cerebrum at 3.4 mm isotropic resolution in 15 min. Quantification used a basis set of 17 neurochemical components. The resulting maps were evaluated for reliability and quality and compared to neuroradiologically segmented tumor regions of interest (necrosis, contrast-enhanced, non-contrast-enhanced+edema, peritumoral) and histopathology (e.g., grade, IDH status). RESULTS We found 18/23 measurements to be usable and ten neurochemicals quantified with acceptable quality. The most common findings were increases of glutamine, glycine, and total choline, as well as decreases of N-acetyl-aspartate and total creatine, over most tumor regions. Other metabolites, like taurine and serine, showed mixed behavior. We further found that heterogeneity in the metabolic images often continued into the peritumoral region. While 2-hydroxy-glutarate could not be satisfactorily quantified, we found a tendency for a decrease of glutamate in IDH1-mutant HGGs. DISCUSSION Our findings corresponded well to the clinical tumor segmentation but were more heterogeneous and often extended into the peritumoral region. Our results agree with previous knowledge, but at a resolution not previously feasible. Beyond glycine and glutamine and their role in glioma progression, more research is needed on the connection of glutamate and other metabolites to specific mutations. The addition of low-grade gliomas and statistical ROI analysis in a larger cohort will be the next important steps in defining the benefits of our 7T MRSI approach for mapping spatial metabolic tumor profiles.
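
Basis-set quantification of a spectrum amounts to expressing the measured signal as a non-negative linear combination of metabolite basis spectra. The sketch below shows that idea with three invented Gaussian "metabolites"; the actual pipeline fits 17 components plus baseline and lineshape terms, which are omitted here.

```python
# Hypothetical sketch of basis-set spectral quantification via
# non-negative least squares. Basis shapes and amplitudes are invented.
import numpy as np
from scipy.optimize import nnls

ppm = np.linspace(0.5, 4.5, 512)

def peak(center, width=0.08):
    """Toy Gaussian resonance standing in for a metabolite basis spectrum."""
    return np.exp(-0.5 * ((ppm - center) / width) ** 2)

# toy basis: NAA (2.01 ppm), total creatine (3.03), total choline (3.21)
basis = np.column_stack([peak(2.01), peak(3.03), peak(3.21)])
true_amps = np.array([1.0, 0.8, 0.9])

rng = np.random.default_rng(0)
spectrum = basis @ true_amps + rng.normal(0, 0.02, ppm.size)

amps, rnorm = nnls(basis, spectrum)   # amplitudes constrained >= 0
print(amps.round(2))                  # ~[1.0, 0.8, 0.9]
```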


Geophysics ◽  
2001 ◽  
Vol 66 (1) ◽  
pp. 78-89 ◽  
Author(s):  
Donat Demanet ◽  
François Renardy ◽  
Kris Vanneste ◽  
Denis Jongmans ◽  
Thierry Camelbeeck ◽  
...  

As part of a paleoseismological investigation along the Bree fault scarp (western border of the Roer Graben), various geophysical methods [electrical profiling, electromagnetic (EM) profiling, seismic refraction tests, electrical tomography, ground-penetrating radar (GPR), and high-resolution seismic reflection profiles] were used to locate and image an active fault zone at depths ranging from a few decimeters to a few tens of meters. These geophysical investigations, in parallel with geomorphological and geological analyses, helped in deciding where to locate trench excavations exposing the fault surfaces. The results could then be checked against the observations in four trenches excavated across the scarp. At all sites, the geophysical methods revealed anomalies at the fault position. The contrast in physical properties (electrical resistivity and permittivity, seismic velocity) observed between the two fault blocks results from differences in the lithology of the juxtaposed soil layers and from a change in the water table depth across the fault. Very fast techniques like electrical and EM profiling or seismic refraction profiling localized the fault position to within a few meters. In a second step, more detailed methods (electrical tomography and GPR) imaged the fault zone more precisely and revealed some structures that were observed in the trenches. Finally, one high-resolution seismic reflection profile imaged the displacement of the fault at depths as large as 120 m and filled the gap between classical seismic reflection profiles and the shallow geophysical techniques. As in all geophysical surveys, data quality depends strongly on the geologic environment and on the contrast in physical properties between the juxtaposed formations. The combined use of various geophysical techniques is thus recommended for fault mapping, particularly for a preliminary investigation when the geological context is poorly defined.
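
The few-meter localization from profiling methods rests on a simple principle: the fault shows up as the steepest lateral change in the measured property. A minimal sketch, with synthetic values that are not Bree data:

```python
# Illustrative sketch: locate a fault along a profiling line as the point
# of steepest lateral change in apparent resistivity. Values are synthetic.
import numpy as np

x = np.arange(0.0, 200.0, 5.0)            # station positions, m
rho = np.where(x < 112.0, 120.0, 40.0)    # resistivity step across the fault
rho += np.random.normal(0, 3.0, x.size)   # measurement noise

grad = np.abs(np.gradient(rho, x))        # lateral gradient of the profile
print(f"fault located near x = {x[np.argmax(grad)]:.0f} m")  # ~110 m
```

Real profiles need filtering and geological judgment, but the station spacing sets the few-meter accuracy the abstract reports.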


2004 ◽  
Vol 18 (2) ◽  
pp. 80-87 ◽  
Author(s):  
Archie Heddings ◽  
Mehmet Bilgen ◽  
Randolph Nudo ◽  
Bruce Toby ◽  
Terence McIff ◽  
...  

Objectives. It is widely accepted that peripheral nerve repairs performed within 6 weeks of injury have much better outcomes than those performed later. However, no diagnostic technique can determine in the early postinjury phase whether a traumatic peripheral nerve injury requires surgical intervention. The objective of this article was to determine whether novel, noninvasive magnetic resonance imaging techniques could demonstrate the microstructure of human peripheral nerves that is necessary for determining prognosis and whether surgery is indicated following traumatic injury. Methods. Ex vivo magnetic resonance imaging protocols were developed on a 9.4-T research scanner using spin-echo proton density and gradient-echo imaging sequences and a specially designed, inductively coupled radio-frequency coil. These imaging protocols were applied to in situ imaging of the human median nerve in 4 fresh-frozen cadaver arms. Results. Noninvasive high-resolution images of the human median nerve were obtained. Structures observed in the nerve included fascicles, interfascicular epineurium, perineurium, and intrafascicular septations. Conclusion. Application of these imaging techniques to clinical scanners could provide physicians with a tool capable of grading the severity of nerve injuries and providing indications for surgery in the early postinjury phase.


2021 ◽  
Vol 36 (Supplement_1) ◽  
Author(s):  
Juliane Hermann ◽  
Ute Raffetseder ◽  
Michaela Lellig ◽  
Joachim Jankowski ◽  
Vera Jankowski

Abstract Background and Aims With the continuous identification of post-translationally modified isoforms of proteins, it is becoming increasingly clear that post-translational modifications, which limit or modify the biological functions of native proteins, are majorly involved in the development of various chronic diseases. This is mostly due to technically advanced molecular identification and quantification methods, mainly based on mass spectrometry. Mass spectrometry has become one of the most powerful tools for the identification of lipids. Method In this study, we used sophisticated high-resolution mass-spectrometric methods to analyze the soluble ligand of the receptor Notch-3, namely the Y-box protein (YB)-1, in serum from systemic lupus erythematosus (SLE) patients. In addition, kidneys of lupus-prone (MRL.lpr) mice were analyzed by mass-spectrometric imaging techniques to identify the underlying pathomechanisms. Serum YB-1 was isolated by chromatographic methods, then digested with trypsin and analyzed by matrix-assisted laser desorption/ionization mass spectrometry (MALDI-MS). The kidneys were embedded in paraffin; kidney sections were then deparaffinized, digested with trypsin, and analyzed by mass-spectrometric imaging techniques. Mass spectrometry of extracellular YB-1 in SLE patient serum revealed post-translational guanidinylation of two lysines within the highly conserved cold shock domain (CSD) of the YB-1 protein (YB-1-2G). Patients with increased disease activity and those with active renal involvement (lupus nephritis, LN) had a higher degree of dual guanidinylation within the CSD. Of note, at least one of these modifications was present in all analyzed LN patients, whereas single-guanidinylated YB-1 was present in only one, and the double modification in none, of the control individuals. Mass-spectrometric imaging analyses specifically localized YB-1-2G and increased Notch-3 expression in kidney sections from MRL.lpr mice. Results The data from this study clearly demonstrate the high potential of high-resolution mass-spectrometric methods, as well as mass-spectrometric imaging techniques, to identify pathomechanisms of diseases like SLE/LN.
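
Detecting a guanidinylation site in MALDI-MS comes down to a mass-shift calculation: the modification adds CH2N2 (about +42.022 Da) per modified lysine, so singly and doubly modified peptides appear at predictable m/z offsets. The peptide below is a made-up example, not an actual YB-1 CSD sequence.

```python
# Hedged sketch: expected [M+H]+ shifts from guanidinylated lysines.
# monoisotopic residue masses (Da)
residue = {'G': 57.02146, 'A': 71.03711, 'S': 87.03203, 'K': 128.09496,
           'V': 99.06841, 'L': 113.08406, 'E': 129.04259, 'F': 147.06841}
WATER, PROTON = 18.01056, 1.00728
GUANIDINYL = 42.02180   # CH2N2 added to a lysine side chain

def mh(peptide, n_guanidinyl=0):
    """[M+H]+ m/z of a peptide with n guanidinylated lysines."""
    m = sum(residue[aa] for aa in peptide) + WATER
    return m + n_guanidinyl * GUANIDINYL + PROTON

pep = "SAKELGKVF"   # hypothetical peptide containing two lysines
for n in (0, 1, 2):
    print(f"{n} modified Lys: m/z {mh(pep, n):.4f}")
```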


2015 ◽  
Vol 68 (2) ◽  
pp. 221-227 ◽  
Author(s):  
Cristina Paixão Araújo ◽  
João Felipe Coimbra Leite Costa

Abstract Decisions, from mineral exploration to mining operations, are based on grade block models obtained from samples. This study evaluates the impact of using imprecise data in short-term planning. The exhaustive Walker Lake dataset is used as the source of the true grades. Initially, samples are obtained from the exhaustive dataset at regularly spaced grids of 20 × 20 m and 5 × 5 m. A relative error (imprecision) of ±25% and a 10% bias are added to the data spaced at 5 × 5 m (short-term geological data) in different scenarios. To combine these different types of data, two methodologies are investigated: cokriging and ordinary kriging. Both types of data are used to estimate blocks with the two methodologies. Grade-tonnage curves and swath plots are used to compare the results against the true block grade distribution. In addition, block misclassification is evaluated. The results show that standardized ordinary cokriging is the better methodology for imprecise and biased data, producing estimates closer to the true block grade distribution and reducing block misclassification.
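
As a minimal sketch of one of the two methodologies, the code below implements ordinary kriging (the paper's standardized ordinary cokriging is not reproduced) and injects the ±25% relative error and 10% bias described above into synthetic samples; the variogram model, range, and data are invented. Because ordinary kriging weights sum to one, a systematic bias in the data propagates directly into the estimates, which is the study's motivation for a cokriging treatment.

```python
# Minimal ordinary-kriging sketch with imprecise, biased data.
# All data and variogram parameters are illustrative assumptions.
import numpy as np

def exp_cov(h, sill=1.0, vrange=30.0):
    """Covariance from an exponential variogram model."""
    return sill * np.exp(-3.0 * h / vrange)

def ordinary_krige(xy, z, target):
    n = len(z)
    d = np.linalg.norm(xy[:, None, :] - xy[None, :, :], axis=2)
    # kriging system with a Lagrange multiplier enforcing unit-sum weights
    A = np.ones((n + 1, n + 1)); A[:n, :n] = exp_cov(d); A[n, n] = 0.0
    b = np.ones(n + 1); b[:n] = exp_cov(np.linalg.norm(xy - target, axis=1))
    w = np.linalg.solve(A, b)
    return float(w[:n] @ z)

rng = np.random.default_rng(1)
xy = rng.uniform(0, 100, (50, 2))                 # sample locations
true = np.sin(xy[:, 0] / 15) + np.cos(xy[:, 1] / 20)
noisy = true * (1 + rng.uniform(-0.25, 0.25, 50)) * 1.10  # +-25% error, +10% bias

print(round(ordinary_krige(xy, noisy, np.array([50.0, 50.0])), 3))
```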

