A Macintosh Interface for the Cameca Camebax-Micro Electron Microprobe

Author(s):  
Carl E. Henderson

Over the past few years it has become apparent in our multi-user facility that the computer system and software supplied in 1985 with our CAMECA CAMEBAX-MICRO electron microprobe analyzer offer the greatest potential for improvement and updating of any component of the instrument. While the standard CAMECA software running on a DEC PDP-11/23+ computer under the RSX-11M operating system can perform almost any task required of the instrument, the commands are not always intuitive and can be difficult to remember for the casual user (of which our laboratory has many). Given the widespread and growing use of other microcomputers (such as PCs and Macintoshes) by users of the microprobe, the PDP has become the “oddball” and has also fallen behind the state of the art in processing speed and disk storage capability. Upgrade paths within products available from DEC are considered too expensive for the benefits received. After using a Macintosh for other laboratory tasks, such as instrument-use and billing records, word processing, and graphics display, its distinctive and “friendly” user interface suggested that it could provide an easier-to-use system for computer control of the electron microprobe automation. Specifically, a Macintosh IIx was chosen for its capacity for third-party add-on cards used in instrument control.

Author(s):  
E. Völkl ◽  
L.F. Allard ◽  
T.A. Nolan ◽  
D. Hill ◽  
M. Lehmann

Due to the availability of fast computer networks such as Ethernet, FDDI and ATM, the idea of Telemicroscopy, including running electron microscopes from remote locations, has gained momentum. Fan, Ellisman, Zaluzec and Parvin have discussed aspects of systems which support such capabilities. In each of these reports the authors describe new stand-alone software packages that are required to run their systems. In order to make remote microscopy more universally available, we have chosen instead to expand de facto standard commercial software to provide for computerized microscope control and remote control.

All of the major instruments in our user facility have been converted to digital operation and the darkroom has been abandoned completely. It is a logical extension of digital imaging to provide for computer control of the instrument operation, since the instrument parameters can be adjusted using feedback from analysis of the digital image. Digital image recording and display typically uses commercial software such as DigitalMicrograph© (which controls, e.g., the CCD camera on the HF-2000). This program provides the four capabilities necessary to implement instrument control: it allows camera control, provides image processing tools, incorporates a scripting language, and allows C code to be implemented.
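The abstract describes, but does not show, the feedback idea at the heart of this approach: adjust an instrument parameter, re-acquire, score the digital image, and repeat. Below is a minimal sketch of such a loop, written as generic Python rather than DigitalMicrograph script; `acquire_image` and `set_focus` are hypothetical callbacks standing in for whatever the scripting layer actually exposes, and the normalized-variance focus score is just one common choice.

```python
import numpy as np

def sharpness(img: np.ndarray) -> float:
    """Normalized variance: a common focus score for digital images."""
    return img.var() / (img.mean() ** 2 + 1e-12)

def autofocus(acquire_image, set_focus, z0, step=1.0, shrink=0.5, iters=20):
    """Hill-climb on the focus score.

    `acquire_image` and `set_focus` are hypothetical callbacks into the
    instrument-control layer; only the control logic matters here.
    """
    z = z0
    for _ in range(iters):
        scores = {}
        for dz in (-step, 0.0, +step):
            set_focus(z + dz)
            scores[dz] = sharpness(acquire_image())
        best_dz = max(scores, key=scores.get)
        if best_dz == 0.0:
            step *= shrink       # converging: refine the search step
        z += best_dz
    set_focus(z)
    return z
```

The same pattern (score an image, nudge a parameter, re-acquire) carries over to stigmation or beam alignment; only the metric and the adjusted parameter change.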


Author(s):  
John T. Armstrong

One of the most cited papers in the geological sciences has been that of Albee and Bence on the use of empirical "α-factors" to correct quantitative electron microprobe data. During the past 25 years this method has remained the most commonly used correction for geological samples, despite the facts that few investigators have actually determined empirical α-factors, instead employing tables of α-factors calculated with one of the conventional "ZAF" correction programs; that a number of investigators have shown the assumption of a constant α-factor to be incorrect in binary systems where there are large matrix corrections (e.g., 2-3); and that the procedure's advantages in program size and computational speed are much less important today because of developments in computing capabilities. The question thus exists whether it is time to honorably retire the Bence-Albee procedure and turn to more modern, robust correction methods. This paper proposes that, although it is perhaps time to retire the original Bence-Albee procedure, it should be replaced by a similar method based on composition-dependent polynomial α-factor expressions.
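For readers who have not met it, the classical Bence-Albee scheme relates the measured k-ratio of each component to the concentrations through a weighted mean of binary α-factors (the β-factor), iterated to convergence. The sketch below is a non-authoritative illustration of the proposal's shape: it uses the simplified relation C_i/k_i = β_i with β_i = Σ_j C_j·α_ij and a quadratic polynomial α as a stand-in for the composition-dependent expressions advocated here. The `alpha_coeffs` table and the choice of evaluating each α at the current estimate of C_i are assumptions made for illustration; the actual formulation and coefficients are in the paper.

```python
import numpy as np

def bence_albee(k, alpha_coeffs, tol=1e-6, max_iter=50):
    """Iterative Bence-Albee-style matrix correction (simplified sketch).

    k            : measured k-ratios for the n components
    alpha_coeffs : n x n x 3 array; the alpha-factor of component i in
                   component j is a0 + a1*C + a2*C**2, evaluated here at
                   the current estimate of C_i (an illustrative choice)
    Returns estimated weight fractions summing to 1.
    """
    k = np.asarray(k, dtype=float)
    C = k / k.sum()                      # initial guess: normalized k-ratios
    for _ in range(max_iter):
        alpha = (alpha_coeffs[..., 0]
                 + alpha_coeffs[..., 1] * C[:, None]
                 + alpha_coeffs[..., 2] * C[:, None] ** 2)
        beta = alpha @ C                 # beta_i = sum_j C_j * alpha_ij
        C_new = k * beta
        C_new /= C_new.sum()             # renormalize to unit total
        if np.max(np.abs(C_new - C)) < tol:
            break
        C = C_new
    return C_new
```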


Author(s):  
N. D. Evans ◽  
M. K. Kundmann

Post-column energy-filtered transmission electron microscopy (EFTEM) is inherently challenging, as it requires the researcher to set up, align, and control both the microscope and the energy filter. The software behind an EFTEM system is therefore critical to efficient, day-to-day application of this technique. This is particularly the case in a multiple-user environment such as the Shared Research Equipment (SHaRE) User Facility at Oak Ridge National Laboratory. Here, visiting researchers, who may be unfamiliar with the details of EFTEM, need to accomplish as much as possible in a relatively short period of time.

We describe here our work in extending the base software of a commercially available EFTEM system in order to automate and streamline particular EFTEM tasks. The EFTEM system used is a Philips CM30 fitted with a Gatan Imaging Filter (GIF). The base software supplied with this system consists primarily of two Macintosh programs and a collection of add-ons (plug-ins) which provide the instrument control, imaging, and data analysis facilities needed to perform EFTEM.
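The specific tasks automated are detailed in the paper; as a concrete flavor of what EFTEM scripting typically involves, the sketch below implements the standard three-window elemental-mapping step: fit a power-law background A·E^(−r) per pixel from two pre-edge images and subtract its extrapolation from the post-edge image. This is an illustrative Python version of that textbook model, not the SHaRE plug-in code itself.

```python
import numpy as np

def three_window_map(pre1, pre2, post, E1, E2, E3):
    """Three-window EFTEM elemental map (textbook power-law model).

    pre1, pre2 : pre-edge images recorded at energy losses E1 < E2
    post       : post-edge image recorded at energy loss E3
    Fits I(E) = A * E**(-r) per pixel from the two pre-edge windows and
    subtracts the extrapolated background from the post-edge image.
    """
    pre1 = np.clip(np.asarray(pre1, float), 1e-6, None)  # guard log(0)
    pre2 = np.clip(np.asarray(pre2, float), 1e-6, None)
    r = np.log(pre1 / pre2) / np.log(E2 / E1)   # per-pixel exponent
    A = pre2 * E2 ** r                          # per-pixel amplitude
    return post - A * E3 ** (-r)
```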


Author(s):  
John Mansfield

Advances in camera technology and digital instrument control mean that in modern microscopy the image that was, in the past, typically recorded on a piece of film is now recorded directly into a computer. The transfer of the analog image seen in the microscope to a digitized picture in the computer does not mean, however, that the problems associated with recording images, analyzing them, and preparing them for publication have all miraculously been solved. The steps involved in recording an image on film remain largely intact in the digital world: the image is recorded, prepared for measurement in some way, analyzed, and then prepared for presentation.

Digital image acquisition schemes are largely the realm of the microscope manufacturers; however, there are also a multitude of “homemade” acquisition systems in microscope laboratories around the world. It is not the mission of this tutorial to deal with the various acquisition systems, but rather to introduce the novice user to rudimentary image processing and measurement.
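As a taste of the rudimentary processing and measurement the tutorial introduces, the following sketch thresholds a micrograph, labels the connected features, and reports their sizes with scikit-image. The filename and pixel calibration are placeholders; any grayscale image will do.

```python
import numpy as np
from skimage import io, filters, measure

# Placeholder input; pixel_size_nm converts pixels to physical units.
img = io.imread("micrograph.tif", as_gray=True)
pixel_size_nm = 2.5

# Segment features with an automatic (Otsu) threshold, then label them.
binary = img > filters.threshold_otsu(img)
labels = measure.label(binary)

# Measure each connected feature: area and equivalent circular diameter.
for region in measure.regionprops(labels):
    if region.area < 10:                 # skip noise specks
        continue
    d_nm = region.equivalent_diameter * pixel_size_nm
    print(f"feature {region.label}: area = {region.area} px, "
          f"diameter ~ {d_nm:.1f} nm")
```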


1989 ◽  
Author(s):  
Larry Lewandowski ◽  
David A. Kobus ◽  
Malia M. Flood

Author(s):  
Charles Roddie

When interacting with others, it is often important for you to know what they have done in similar situations in the past: to know their reputation. One reason is that their past behavior may be a guide to their future behavior. A second reason is that their past behavior may have qualified them for reward and cooperation, or for punishment and revenge. The fact that you respond positively or negatively to the reputation of others then generates incentives for them to maintain good reputations. This article surveys the game theory literature which analyses the mechanisms and incentives involved in reputation. It also discusses how experiments have shed light on the strategic behavior involved in maintaining reputations, and on the adequacy of unreliable and third-party information (gossip) for maintaining incentives for cooperation.


2021 ◽  
Vol 11 (9) ◽  
pp. 4232
Author(s):  
Krishan Harkhoe ◽  
Guy Verschaffelt ◽  
Guy Van der Sande

Delay-based reservoir computing (RC), a neuromorphic computing technique, has attracted considerable interest, as it promises compact and high-speed RC implementations. To further boost computing speed, we introduce and study an RC setup based on spin-VCSELs, thereby exploiting the high polarization modulation speed inherent to these lasers. Based on numerical simulations, we benchmarked this setup against state-of-the-art delay-based RC systems and analyzed its parameter space for optimal performance. The high modulation speed enabled us to fit more virtual nodes into a shorter time interval. However, we found that at these short time scales the delay time and feedback rate heavily influence the nonlinear dynamics. Therefore, and contrary to other laser-based RC systems, the delay time has to be optimized in order to obtain good RC performance. We achieved state-of-the-art performance on a benchmark time-series prediction task. This spin-VCSEL-based RC system shows a ten-fold improvement in processing speed, which can be enhanced further in a straightforward way by increasing the birefringence of the VCSEL chip.
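The spin-flip VCSEL model itself is too involved for a short example, but the delay-based RC pipeline the abstract builds on can be shown generically: a single nonlinear node is time-multiplexed into N virtual nodes by a random input mask, and only a linear readout is trained, by ridge regression. In this sketch a tanh nonlinearity stands in for the VCSEL polarization dynamics, the feedback term plays the role of the delay loop, and the toy one-step prediction task is merely a stand-in for the benchmark used in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

N, gamma, eta = 50, 0.5, 0.4        # virtual nodes, input scaling, feedback rate
mask = rng.uniform(-1, 1, N)        # fixed random input mask

def reservoir(u):
    """Drive a delayed nonlinear node with input u; return the state matrix."""
    states = np.zeros((len(u), N))
    x = np.zeros(N)                 # virtual-node values, one delay interval
    for t, ut in enumerate(u):
        # tanh stands in for the laser response; eta*x is the delayed feedback
        x = np.tanh(eta * x + gamma * mask * ut)
        states[t] = x
    return states

# Toy task: one-step-ahead prediction of a noisy sine wave.
u = np.sin(0.1 * np.arange(2000)) + 0.05 * rng.standard_normal(2000)
X, y = reservoir(u[:-1]), u[1:]

# Ridge-regression readout (regularized least squares).
lam = 1e-6
W = np.linalg.solve(X.T @ X + lam * np.eye(N), X.T @ y)
print("NMSE:", np.mean((X @ W - y) ** 2) / np.var(y))
```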


Data ◽  
2021 ◽  
Vol 6 (8) ◽  
pp. 87
Author(s):  
Sara Ferreira ◽  
Mário Antunes ◽  
Manuel E. Correia

Deepfake and manipulated digital photos and videos are being increasingly used in a myriad of cybercrimes. Ransomware, the dissemination of fake news, and digital kidnapping-related crimes are the most recurrent, with tampered multimedia content as the primary dissemination vehicle. Digital forensic analysis tools are widely used in criminal investigations to automate the identification of digital evidence in seized electronic equipment. The number of files to be processed and the complexity of the crimes under analysis have highlighted the need for efficient digital forensics techniques grounded on state-of-the-art technologies. Machine Learning (ML) researchers have been challenged to apply techniques and methods to improve the automatic detection of manipulated multimedia content. However, such methods have not yet been widely incorporated into digital forensic tools, mostly due to the lack of realistic and well-structured datasets of photos and videos. The diversity and richness of the datasets are crucial to benchmark ML models and to evaluate their suitability for real-world digital forensics applications. An example is the development of third-party modules for the widely used Autopsy digital forensic application. This paper presents a dataset obtained by extracting a set of simple features from genuine and manipulated photos and videos that are part of existing state-of-the-art datasets. The resulting dataset is balanced, and each entry comprises a label and a vector of numeric values corresponding to the features extracted through a Discrete Fourier Transform (DFT). The dataset is available in a GitHub repository; the total number of photos and video frames is 40,588 and 12,400, respectively. The dataset was validated and benchmarked with deep-learning Convolutional Neural Network (CNN) and Support Vector Machine (SVM) methods, although a plethora of other methods could be applied. Overall, the results show a better F1-score for CNN than for SVM, for both photo and video processing: CNN achieved F1-scores of 0.9968 and 0.8415 for photos and videos, respectively, while SVM with 5-fold cross-validation obtained 0.9953 and 0.7955. A set of methods written in Python is available for researchers, namely to preprocess the original photo and video files, extract the features, and build the training and testing sets. Additional methods are also available to convert the original PKL files into CSV and TXT, which gives ML researchers more flexibility to use the dataset with existing ML frameworks and tools.
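The exact feature recipe is defined in the paper; as a plausible illustration of DFT-based features of this kind, the sketch below computes the azimuthally averaged log power spectrum of a grayscale image, yielding a fixed-length 1-D vector per photo or video frame. Paired with a genuine/manipulated label, such vectors match the entry layout described above and can be fed directly to an SVM or a small CNN.

```python
import numpy as np

def dft_feature_vector(img: np.ndarray, n_bins: int = 50) -> np.ndarray:
    """Azimuthally averaged log power spectrum of a grayscale image.

    An illustrative DFT-based feature; the authors' exact feature
    definition is given in the paper, not here.
    """
    spectrum = np.abs(np.fft.fftshift(np.fft.fft2(img))) ** 2
    h, w = spectrum.shape
    yy, xx = np.indices(spectrum.shape)
    r = np.hypot(yy - h // 2, xx - w // 2)        # radius of each pixel
    bins = np.linspace(0, min(h, w) // 2, n_bins + 1)
    idx = np.digitize(r.ravel(), bins) - 1
    flat = spectrum.ravel()
    feats = np.array([flat[idx == b].mean() if np.any(idx == b) else 0.0
                      for b in range(n_bins)])
    return np.log1p(feats)                        # compress the dynamic range
```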


2021 ◽  
Vol 6 (1) ◽  
Author(s):  
Xudong Zhu ◽  
Zhiyang Chen ◽  
Weiyan Shen ◽  
Gang Huang ◽  
John M. Sedivy ◽  
...  

Remarkable progress in ageing research has been achieved over the past decades. General perception and experimental evidence indicate that the decline of physical function is often initiated by cell senescence and organ ageing. Epigenetic dynamics and immunometabolic reprogramming are linked to alterations in the cellular response to intrinsic and extrinsic stimuli; they represent current research hotspots, as they not only (re-)shape individual cell identity but are also involved in cell fate decisions. This review focuses on present findings and emerging concepts in epigenetic, inflammatory, and metabolic regulation and on the consequences of the ageing process. Potential therapeutic interventions targeting cell senescence and its regulatory mechanisms, using state-of-the-art techniques, are also discussed.


2021 ◽  
Vol 26 (4) ◽  
Author(s):  
Mazen Mohamad ◽  
Jan-Philipp Steghöfer ◽  
Riccardo Scandariato

Security Assurance Cases (SAC) are a form of structured argumentation used to reason about the security properties of a system. After the successful adoption of assurance cases for safety, SAC have gained significant traction in recent years, especially in safety-critical industries (e.g., automotive), where there is increasing pressure to comply with several security standards and regulations. Accordingly, research in the field of SAC has flourished in the past decade, with different approaches being investigated. In an effort to systematize this active field of research, we conducted a systematic literature review (SLR) of the existing academic studies on SAC. Our review resulted in an in-depth analysis and comparison of 51 papers. Our results indicate that, while there are numerous papers discussing the importance of SAC and their usage scenarios, the literature is still immature with respect to concrete support for practitioners on how to build and maintain a SAC. More importantly, even though some methodologies are available, their validation and tool support are still lacking.

