Glossary of terms used in photocatalysis and radiation catalysis (IUPAC Recommendations 2011)

2011 ◽  
Vol 83 (4) ◽  
pp. 931-1014 ◽  
Author(s):  
Silvia E. Braslavsky ◽  
André M. Braun ◽  
Alberto E. Cassano ◽  
Alexei V. Emeline ◽  
Marta I. Litter ◽  
...  

This glossary of terms covers phenomena considered under the very wide terms photocatalysis and radiation catalysis. A clear distinction is made between phenomena related to either photochemistry and photocatalysis or radiation chemistry and radiation catalysis. The term “radiation” is used here as embracing electromagnetic radiation of all wavelengths, but in general excluding fast-moving particles. Consistent definitions are given of terms in the areas mentioned above, as well as definitions of the most important parameters used for the quantitative description of the phenomena. Terms related to the up-scaling of photocatalytic processes for industrial applications have been included. This Glossary should be used together with the Glossary of terms used in photochemistry, 3rd edition, IUPAC Recommendations 2006: (doi:10.1351/pac200779030293) as well as with the IUPAC Compendium of Chemical Terminology, 2nd ed. (the “Gold Book”, 2006– doi:10.1351/goldbook) because many terms used in photocatalysis are defined in these documents.

A microwave oven converts electromagnetic energy into thermal energy when its cavity is loaded with a dielectric material. Ordinary microwave ovens lack features for detecting parameters such as temperature, weight, and the presence of a load; several laboratory and industrial applications require such features in order to switch the oven off when no load is present, because reflections of electromagnetic radiation inside an empty cavity can damage the oven. This study presents an overview of microwave oven characteristics and of the electromagnetic radiation generated inside the cavity. The parameters measured inside the microwave oven, methods for power attenuation, microwave power detectors, and microwave oven leakage are discussed as well. In the methodology of this work, a new technique is proposed for controlling the microwave oven based on measurement of the leaked microwave power. Preliminary results showed that the measured leakage of electromagnetic power changes with the state/phase of the material inside the oven, which supports the feasibility of the proposed technique. This work will be continued by connecting the microwave oven to a spectrum analyzer and a computer via hardware and software interfaces, following the methodology of this article. A computer code will be developed to read the measured power and automatically switch off the microwave oven depending on the state of the material. Index Terms: Microwave, power, measurement, control.
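As a rough sketch of the control scheme outlined above, the Python loop below switches the oven off once the measured leakage exceeds a limit. The function names read_leaked_power_dbm and set_oven_power, and the threshold value, are hypothetical placeholders; the paper only states that a spectrum analyzer and a computer interface will be used.

```python
import random
import time

# Hypothetical hardware bindings: the paper only says a spectrum analyzer
# and a computer interface will be connected to the oven.
def read_leaked_power_dbm() -> float:
    # Placeholder: simulate a leakage reading; replace with the real
    # spectrum-analyzer driver in an actual setup.
    return random.uniform(-30.0, -5.0)

def set_oven_power(on: bool) -> None:
    # Placeholder for the relay call that switches the magnetron supply.
    print(f"oven power {'ON' if on else 'OFF'}")

LEAKAGE_LIMIT_DBM = -10.0   # arbitrary example threshold
POLL_INTERVAL_S = 1.0

def control_loop() -> None:
    """Stop heating once leaked power exceeds the limit, treating the rise
    in leakage as a proxy for a change in the load's state/phase."""
    set_oven_power(True)
    try:
        while read_leaked_power_dbm() <= LEAKAGE_LIMIT_DBM:
            time.sleep(POLL_INTERVAL_S)
    finally:
        set_oven_power(False)

if __name__ == "__main__":
    control_loop()
```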


1982 ◽  
Vol 55 (3) ◽  
pp. 575-668 ◽  
Author(s):  
G. G. A. Böhm ◽  
J. O. Tveekrem

Materials ◽  
2021 ◽  
Vol 14 (17) ◽  
pp. 4996
Author(s):  
Anna Ostaszewska-Liżewska ◽  
Michał Nowicki ◽  
Roman Szewczyk ◽  
Mika Malinen

This paper presents a novel finite element method (FEM) approach to optimizing the driving frequency in magneto-mechanical systems that use contactless magnetoelastic torque sensors. The optimization technique is based on a generalization of the axial- and shear-stress dependence of the magnetic permeability tensor. This generalization makes it possible to determine the torque dependence of the permeability tensor from measurements of the effect of axial stress on the magnetization curve. Such a quantitative description of the torque dependence of a magnetic permeability tensor has not been presented before. Results from the FEM-based modeling method were validated against a real magnetoelastic torque sensor: the sensitivity characteristics of the model and of the real sensor show a maximum at a similar driving-current frequency. Consequently, the proposed method demonstrates a new possibility for optimizing magnetoelastic sensors for automotive and industrial applications.
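As a minimal illustration of the frequency-optimization step, not the authors' FEM model, the sketch below sweeps candidate driving frequencies and selects the one with the largest modeled sensitivity. The function sensor_sensitivity is a stand-in for the FEM evaluation of the stress-dependent permeability model.

```python
import numpy as np

def sensor_sensitivity(freq_hz: float) -> float:
    # Stand-in for the FEM evaluation: in the paper the sensitivity comes
    # from a model with a stress-dependent permeability tensor; here a
    # made-up smooth peak near 2 kHz is used purely to show the sweep logic.
    return float(np.exp(-((freq_hz - 2.0e3) / 1.5e3) ** 2))

def optimal_driving_frequency(f_min: float, f_max: float, n: int = 200) -> float:
    """Sweep driving frequencies and return the one giving maximum sensitivity."""
    freqs = np.linspace(f_min, f_max, n)
    sens = np.array([sensor_sensitivity(f) for f in freqs])
    return float(freqs[int(np.argmax(sens))])

if __name__ == "__main__":
    best = optimal_driving_frequency(100.0, 10_000.0)
    print(f"estimated optimum driving frequency: {best:.0f} Hz")
```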


Author(s):  
Sergey Gladkov

The problem of finding the connection between the flight speed of a metal object and the wavelength of the electromagnetic radiation that accompanies its motion has been solved. The calculations are based on the theory of Liénard-Wiechert potentials. Under the assumption of a dipole radiation mechanism, the intensity of the electromagnetic radiation was calculated and its spatial distribution was found.
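For reference, the standard non-relativistic dipole-radiation formulas obtained from the Liénard-Wiechert potentials are reproduced below in SI units; these are the generic textbook expressions, not necessarily the author's specific result for a moving metal object.

```latex
% Angular distribution and total power radiated by a time-varying dipole p(t),
% non-relativistic limit, SI units; theta is measured from the dipole axis.
\[
\frac{dP}{d\Omega} = \frac{|\ddot{\mathbf{p}}|^{2}}{16\pi^{2}\varepsilon_{0}c^{3}}\,\sin^{2}\theta ,
\qquad
P = \frac{|\ddot{\mathbf{p}}|^{2}}{6\pi\varepsilon_{0}c^{3}} .
\]
```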


Author(s):  
C. F. Oster

Although ultra-thin sectioning techniques are widely used in the biological sciences, they are somewhat less commonly applied, though very useful, in industrial work. This presentation will review several specific applications where ultra-thin sectioning techniques have proven invaluable.

The preparation of samples for sectioning usually involves embedding in an epoxy resin. Araldite 6005 Resin and Hardener are mixed so that the hardness of the embedding medium matches that of the sample, to reduce distortion of the sample during sectioning. No dehydration series is needed to prepare our usual samples for embedding, but some types require hardening and staining steps. The embedded samples are sectioned with either a prototype of a Porter-Blum Microtome or an LKB Ultrotome III. Both instruments are equipped with diamond knives.

In the study of photographic film, the distribution of the developed silver particles through the layer is important to the image tone and/or scattering power. The morphology of the developed silver is also an important factor, and cross sections will show this structure.


Author(s):  
W.M. Stobbs

I do not have access to the abstracts of the first meeting of EMSA, but at this, the 50th Anniversary meeting of the Electron Microscopy Society of America, I have an excuse to consider the historical origins of the approaches we take to the use of electron microscopy for the characterisation of materials. I have myself been actively involved in the use of TEM for the characterisation of heterogeneities for little more than half of that period. My own view is that it was between the 3rd International Meeting at London and the 1956 Stockholm meeting, the first of the European series, that the foundations of the approaches we now take to the characterisation of a material using the TEM were laid down. (This was 10 years before I took dynamical theory to be etched in stone.) It was at the 1956 meeting that Menter showed lattice resolution images of sodium faujasite and Hirsch, Horne and Whelan showed images of dislocations in the XIVth session on “metallography and other industrial applications”. I have always, incidentally, been delighted by the way the latter authors misinterpreted astonishingly clear thickness fringes in a beaten foil of Al as being contrast due to “large strains”, an error which they corrected with admirable rapidity as the theory developed.

At the London meeting the research described covered a broad range of approaches, including many that are only now being rediscovered as worth further effort; however, such is the power of “the image” to persuade that the above two papers set trends which influence, perhaps too strongly, the approaches we take now. Menter was clear that the way the planes in his image tended to be curved was associated with the imaging conditions rather than with lattice strains, and yet it now seems to be common practice to assume that the dots in an “atomic resolution image” can faithfully represent the variations in atomic spacing at a localised defect. Even when the more reasonable approach is taken of matching the image details with a computed simulation for an assumed model, the non-uniqueness of the interpreted fit seems to be rather rarely appreciated. Hirsch et al., on the other hand, made a point of using their images to get numerical data on characteristics of the specimen they examined, such as its dislocation density, which would not be expected to be influenced by uncertainties in the contrast.

Nonetheless the trends were set, with microscope manufacturers producing higher and higher resolution microscopes, while the blind faith of the users in the image produced as being a near directly interpretable representation of reality seems to have increased rather than been generally questioned. But if we want to test structural models we need numbers, and it is the analogue-to-digital conversion of the information in the image which is required.


Author(s):  
N. V. Larcher ◽  
I. G. Solorzano

It is currently well established that, for an Al-Ag alloy quenched from the α phase and aged within the metastable solvus, the aging sequence is: supersaturated α → GP zones → γ′ → γ (Ag2Al). While GP zones and plate-shaped γ′ are metastable phases, continuously distributed in the matrix, formation of the equilibrium phase γ takes place at grain boundaries by discontinuous precipitation (DP). The crystal structure of both γ′ and γ is hcp, with the following orientation relationship with respect to the fcc α matrix: {0001}γ′,γ // {111}α, <1120>γ′,γ // <110>α. The mechanisms and kinetics of continuous matrix precipitation (CMP) in dilute Al-Ag alloys have been studied in considerable detail. The quantitative description of DP kinetics, however, has received less attention. The present contribution reports the microstructural evolution resulting from aging an Al-Ag alloy with an Ag content higher than those previously reported in the literature, focusing on observations of the γ′ plate-shaped metastable precipitates.


Author(s):  
C J R Sheppard

The confocal microscope is now widely used in both biomedical and industrial applications for imaging, in three dimensions, objects with appreciable depth. There is now a range of different microscopes on the market, which have adopted a variety of different designs. The aim of this paper is to explore the effects on imaging performance of design parameters including the method of scanning, the type of detector, and the size and shape of the confocal aperture.

It is becoming apparent that there is no such thing as an ideal confocal microscope: all systems have limitations, and the best compromise depends on what the microscope is used for and how it is used. The most important compromise at present is between image quality and speed of scanning, which is particularly apparent when imaging with very weak signals. If great speed is not of importance, then the fundamental limitation for fluorescence imaging is the detection of sufficient numbers of photons before the fluorochrome bleaches.
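To make the photon-budget limitation concrete, the back-of-the-envelope sketch below compares the photons detected per pixel in one scan with the total number a fluorophore can emit before bleaching. All numbers are illustrative assumptions, not values from the paper.

```python
def photons_detected_per_pixel(emission_rate_hz: float,
                               dwell_time_s: float,
                               detection_efficiency: float) -> float:
    """Expected number of detected photons for one pixel dwell."""
    return emission_rate_hz * dwell_time_s * detection_efficiency

if __name__ == "__main__":
    # Illustrative assumptions (not from the paper): a fluorophore emitting
    # 1e5 photons/s, a 10 us pixel dwell, 5% overall detection efficiency,
    # and a photobleaching budget of about 1e4 emitted photons in total.
    emitted_per_dwell = 1e5 * 10e-6                      # photons emitted per pixel per scan
    detected = photons_detected_per_pixel(1e5, 10e-6, 0.05)
    bleach_budget = 1e4                                  # total photons before bleaching
    max_scans = bleach_budget / emitted_per_dwell
    print(f"~{detected:.2f} photons detected per pixel per scan")
    print(f"~{max_scans:.0f} scans before the fluorochrome bleaches")
```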

