Self-referenced method for the Judd–Ofelt parametrisation of the Eu3+ excitation spectrum

2022 ◽  
Vol 12 (1) ◽  
Author(s):  
Aleksandar Ćirić ◽  
Łukasz Marciniak ◽  
Miroslav D. Dramićanin

Abstract Judd–Ofelt theory is a cornerstone of lanthanide spectroscopy, as it describes the 4fn emissions and absorptions of lanthanide ions using only three intensity parameters. This study presents a self-referenced technique for computing Judd–Ofelt intensity parameters from the excitation spectra of Eu3+-activated luminescent materials, along with an explanation of the parametrisation procedure and a free, user-friendly web application. The method uses the integrated intensities of the 7F0 → 5D2, 7F0 → 5D4, and 7F0 → 5L6 transitions in the excitation spectrum for estimation and the integrated intensity of the 7F0 → 5D1 magnetic dipole transition for calibration. This approach enables a straightforward derivation of the Ω6 intensity parameter, which is difficult to compute precisely by Krupke's parametrisation of the emission spectrum and is therefore often omitted in published research papers. Compared to the parametrisation of absorption spectra, the described method is more accurate, can be applied to any material form, and requires only a single excitation spectrum.
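The self-referenced calibration described above can be sketched schematically. The snippet below is an illustrative outline only, not the paper's exact formula: the function name, the first-power frequency ratio, and the `scale` constant (which would absorb the magnetic dipole line strength and fundamental constants) are our assumptions, and the squared reduced matrix element `u_sq` must be taken from published Eu3+ tabulations.

```python
def omega_relative(i_ed, nu_ed, i_md, nu_md, u_sq, n, scale=1.0):
    """Relative Judd-Ofelt intensity parameter from one electric-dipole (ED)
    excitation band, calibrated against the 7F0 -> 5D1 magnetic-dipole (MD) band.

    i_ed, i_md   : integrated intensities of the ED and MD bands (same units)
    nu_ed, nu_md : band barycentres in wavenumbers (cm^-1)
    u_sq         : squared reduced matrix element of the ED transition (tabulated)
    n            : refractive index of the host
    scale        : placeholder constant absorbing S_MD and fundamental constants
    """
    chi_ed = n * (n**2 + 2) ** 2 / 9.0  # ED local-field correction
    chi_md = n**3                       # MD local-field correction
    return scale * (i_ed / i_md) * (chi_md / chi_ed) * (nu_md / nu_ed) / u_sq
```

The self-referenced character is visible in the ratio `i_ed / i_md`: any instrument-dependent scaling of the excitation spectrum cancels, which is why no absolute intensity calibration is needed.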


2022 ◽  
Vol 130 (1) ◽  
pp. 207
Author(s):  
Lucca Blois ◽  
Albano N. Carneiro Neto ◽  
Ricardo L. Longo ◽  
Israel F. Costa ◽  
Tiago B. Paolini ◽  
...  

Eu3+ complexes, especially β-diketonate compounds, are well known and widely studied for their luminescence properties, with applications in areas such as sensing and lighting devices. A unique feature of the Eu3+ ion is that the 4f-4f intensity parameters Ωλ can be determined experimentally, directly from the emission spectrum. The equations for determining Ωλ from emission spectra differ depending on whether the detector measures emitted power or, as in modern equipment, photons per second. It is shown that the differences in Ωλ caused by misusing the equations are sizable for Ω4 (ca. 15.5%) for several Eu3+ β-diketonate complexes and lead to differences of ca. 5% in the intrinsic quantum yields Q_Ln^Ln. Owing to unique features of trivalent lanthanide ions, such as the shielding of the 4f electrons, which leads to small covalency and crystal-field effects, a linear correlation was observed between the Ωλ obtained with the emitted-power and photon-counting equations. We stress that care should be taken with the type of detection, and we provide correction factors for the intensity parameters. In addition, we suggest that the integrated intensity (proportional to the area of the emission band) and the centroid (or barycentre) of the transition used for obtaining Ωλ should be determined in the properly Jacobian-transformed spectrum in wavenumbers (or energy). Because the emission bands of typical 4f-4f transitions are narrow, the areas and centroids of the bands do not depend on the transformation within experimental uncertainties. These assessments are relevant because they validate Ωλ values previously determined without the proper spectral transformation.
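The Jacobian transformation referred to above follows from I(ν̄)|dν̄| = I(λ)|dλ|: with ν̄ = 10⁷/λ (λ in nm, ν̄ in cm⁻¹), the intensity density must be multiplied by λ² (up to a constant). A minimal numpy sketch, with function names of our choosing:

```python
import numpy as np

def _trapz(y, x):
    """Trapezoidal integration (avoids version-dependent numpy helpers)."""
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

def to_wavenumber(wavelength_nm, intensity):
    """Jacobian-transform a spectrum from wavelength (nm) to wavenumber (cm^-1)."""
    lam = np.asarray(wavelength_nm, float)
    nu = 1e7 / lam                                # cm^-1
    i_nu = np.asarray(intensity, float) * lam**2  # Jacobian |dlam/dnu| is prop. to lam^2
    order = np.argsort(nu)                        # reorder to ascending wavenumber
    return nu[order], i_nu[order]

def band_area_and_centroid(x, y):
    """Integrated intensity and centroid (barycentre) of an emission band."""
    area = _trapz(y, x)
    return area, _trapz(x * y, x) / area
```

For a narrow band such as a 4f-4f emission line, the centroid computed this way sits essentially at 10⁷/λ₀ of the peak wavelength, illustrating the abstract's point that the transformation barely moves areas and centroids of narrow bands.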


Macromol ◽  
2021 ◽  
Vol 1 (2) ◽  
pp. 130-154
Author(s):  
Efstathios V. Liakos ◽  
Maria Lazaridou ◽  
Georgia Michailidou ◽  
Ioanna Koumentakou ◽  
Dimitra A. Lambropoulou ◽  
...  

Chitin is the second most abundant and one of the most important natural biopolymers worldwide. The main sources for the extraction and exploitation of this natural polysaccharide are crabs and shrimps. Chitosan (poly-β-(1 → 4)-2-amino-2-deoxy-d-glucose) is the most important derivative of chitin and can be used in a wide variety of applications, including cosmetics, pharmaceuticals, biomedicine, and food, giving it high added value. Moreover, chitosan is well suited to adsorption because its amino and hydroxyl groups enable many possible adsorption interactions with pollutants (pharmaceuticals/drugs, metals, phenols, pesticides, etc.). Adsorption is considered one of the most important decontamination techniques because it is simple, low-cost, and fast. This review focuses on recently published research papers (2013–2021), briefly describing the chemical modifications of chitosan (grafting, cross-linking, etc.) for the adsorption of a variety of emerging contaminants from aqueous solutions, together with characterization results. Finally, selected chitosan synthetic routes are tabulated, and pH effects are discussed along with the best-fitting isotherm and kinetic models.
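As an illustration of the isotherm fitting mentioned above, the sketch below fits the Langmuir model qe = qmax·KL·Ce/(1 + KL·Ce) by its common linearisation Ce/qe = Ce/qmax + 1/(KL·qmax); the equilibrium data are hypothetical, invented only for the example.

```python
import numpy as np

def langmuir(ce, qmax, kl):
    """Langmuir isotherm: qe = qmax * KL * Ce / (1 + KL * Ce)."""
    return qmax * kl * ce / (1.0 + kl * ce)

def fit_langmuir(ce, qe):
    """Fit qmax (mg/g) and KL (L/mg) from the linearised form
    Ce/qe = Ce/qmax + 1/(KL*qmax): a straight line in Ce."""
    slope, intercept = np.polyfit(ce, ce / qe, 1)
    qmax = 1.0 / slope
    kl = slope / intercept
    return qmax, kl

# Hypothetical equilibrium data: Ce in mg/L, qe in mg/g, with small added scatter.
ce = np.array([5.0, 10.0, 20.0, 40.0, 80.0, 160.0])
qe = langmuir(ce, 120.0, 0.05) + np.array([0.6, -0.4, 0.5, -0.5, 0.4, -0.3])
qmax_fit, kl_fit = fit_langmuir(ce, qe)
```

Pseudo-first/second-order kinetic models are fitted analogously from their own linearised forms; comparing R² across models is what "best-fitting" refers to in the review.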


2021 ◽  
Vol 11 (11) ◽  
pp. 5219
Author(s):  
Yosuke Sakurai ◽  
Hirotaka Sato ◽  
Nozomu Adachi ◽  
Satoshi Morooka ◽  
Yoshikazu Todaka ◽  
...  

The pulsed neutron Bragg-dip transmission analysis/imaging method is being developed as a new way of evaluating single crystals and oligocrystals. In this study, a single Bragg-dip profile-fitting analysis method was newly developed and applied to position-dependent analysis of detailed inner information in a crystalline grain. In this method, the spectral profile of a single Bragg-dip is analyzed at each position across a grain. As a result, changes in crystal orientation, mosaic spread angle, and perfect-crystal thickness are expected to be evaluated from the wavelength, width, and integrated intensity of the Bragg-dip, respectively. To confirm its effectiveness, the method was applied to experimental position-dependent Bragg-dip transmission spectra of a Si-steel plate consisting of oligocrystals. As a result, inner information of multiple crystalline grains could be visualized and evaluated. A small change in crystal orientation within a grain, about 0.4°, could be observed by imaging the Bragg-dip wavelengths. By imaging the Bragg-dip widths, both another grain and a mosaic block within a grain were detected. Furthermore, the imaging results for the integrated intensities of the Bragg-dips were consistent with the Bragg-dip width imaging. Such small crystallographic changes could not be observed and visualized by previous Bragg-dip analysis methods.
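The three quantities extracted per position by the analysis above (dip wavelength, width, and integrated intensity) can be estimated from a transmission spectrum as in the sketch below. It uses moment-based estimates rather than the authors' least-squares profile fit, and the function name and median-baseline choice are our assumptions.

```python
import numpy as np

def _trapz(y, x):
    """Trapezoidal integration (avoids version-dependent numpy helpers)."""
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

def bragg_dip_parameters(wavelength, transmission):
    """Estimate dip wavelength (-> crystal orientation), dip width (-> mosaic
    spread) and integrated dip intensity (-> crystal thickness) for one pixel.
    Moment-based estimates stand in for a full profile fit here."""
    baseline = np.median(transmission)        # flat transmission away from the dip
    depth = np.clip(baseline - transmission, 0.0, None)  # dip profile, >= 0
    area = _trapz(depth, wavelength)                          # integrated intensity
    lam0 = _trapz(wavelength * depth, wavelength) / area      # centroid -> position
    var = _trapz((wavelength - lam0) ** 2 * depth, wavelength) / area
    fwhm = 2.3548 * np.sqrt(var)              # Gaussian-equivalent full width
    return lam0, fwhm, area
```

Mapping `lam0`, `fwhm`, and `area` over a grid of pixel spectra reproduces, in miniature, the three imaging modes discussed in the abstract.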


2021 ◽  
pp. 0309524X2199244
Author(s):  
Vineet Kumar ◽  
Ram Naresh ◽  
Amita Singh

Unit commitment (UC) is a key optimization task in the day-to-day operational planning of modern power systems. After load forecasting, UC is the next step in the planning process: electric utilities decide in advance which units to start up, when to connect them to the network, and the sequence in which generating units should be shut down and for how long. In view of the above, this paper presents a thorough and precise review of recent approaches to optimizing UC problems, incorporating both stochastic and deterministic loads, based on peer-reviewed papers published in reputed journals. It emphasizes non-conventional energy and distributed power generation, in both regulated and deregulated environments. Along with this overview, a comprehensive analysis of UC algorithms reported since 2015 is discussed to assist new researchers in this domain.
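As a toy illustration of the decisions described above, the sketch below solves a single-period UC instance by exhaustive search over on/off patterns. The three-unit data are invented for the example, and real formulations add minimum up/down times, ramp limits, and start-up costs.

```python
from itertools import product

# Hypothetical 3-unit system: (capacity MW, fixed cost $/h, marginal cost $/MWh)
UNITS = [(200, 500, 10.0), (100, 300, 15.0), (50, 100, 30.0)]

def commit(demand):
    """Exhaustive one-hour unit commitment: try every on/off pattern,
    dispatch committed units cheapest-first, keep the least-cost feasible
    pattern. Returns (cost, pattern) or None if demand cannot be met."""
    best = None
    for pattern in product((0, 1), repeat=len(UNITS)):
        on = [u for u, s in zip(UNITS, pattern) if s]
        if sum(cap for cap, _, _ in on) < demand:
            continue  # infeasible: committed capacity cannot cover the load
        cost, remaining = 0.0, demand
        for cap, fixed, marginal in sorted(on, key=lambda u: u[2]):
            gen = min(cap, remaining)           # economic dispatch, merit order
            cost += fixed + marginal * gen      # fixed cost paid once committed
            remaining -= gen
        if best is None or cost < best[0]:
            best = (cost, pattern)
    return best
```

Exhaustive search is exponential in the number of units, which is precisely why the heuristic and metaheuristic algorithms surveyed in the paper exist.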


2021 ◽  
Vol 22 (S2) ◽  
Author(s):  
Daniele D’Agostino ◽  
Pietro Liò ◽  
Marco Aldinucci ◽  
Ivan Merelli

Abstract
Background: High-throughput sequencing Chromosome Conformation Capture (Hi-C) allows the study of DNA interactions and 3D chromosome folding at the genome-wide scale. Usually, these data are represented as matrices describing the binary contacts among the different chromosome regions. On the other hand, a graph-based representation can be advantageous for describing the complex topology achieved by the DNA in the nucleus of eukaryotic cells.
Methods: Here we discuss the use of a graph database for storing and analysing data from Hi-C experiments. The main issue is the size of the produced data and, with a graph-based representation, the consequent need to adequately manage a large number of edges (contacts) connecting nodes (genes), which represent the sources of information. Currently available graph visualisation tools and libraries fall short with Hi-C data. Graph databases, instead, support both the analysis and the visualisation of the spatial patterns present in Hi-C data, in particular for efficiently comparing different experiments or re-mapping omics data in a space-aware context. In particular, the possibility of describing graphs through statistical indicators and, even more, of correlating them through statistical distributions allows similarities and differences to be highlighted among Hi-C experiments from different cell conditions or cell types.
Results: These concepts have been implemented in NeoHiC, an open-source and user-friendly web application for the progressive visualisation and analysis of Hi-C networks based on the Neo4j graph database (version 3.5).
Conclusion: With the accumulation of more experiments, the tool will provide invaluable support for comparing the neighbours of genes across experiments and conditions, helping to highlight changes in functional domains and identify new co-organised genomic compartments.
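The statistical-indicator comparison described above can be sketched with plain adjacency degrees. The functions below are an illustration only, not NeoHiC's implementation (which runs such queries inside Neo4j).

```python
import numpy as np

def degree_distribution(contacts, n_bins):
    """Degree of each genomic bin in a Hi-C contact graph, where `contacts`
    is a list of (bin_i, bin_j) edges above some contact threshold."""
    deg = np.zeros(n_bins, int)
    for i, j in contacts:
        deg[i] += 1
        deg[j] += 1
    return deg

def compare_experiments(deg_a, deg_b):
    """Pearson correlation of per-bin degrees: one simple statistical
    indicator of similarity between two Hi-C experiments."""
    return float(np.corrcoef(deg_a, deg_b)[0, 1])
```

Two experiments on the same cell type should show a degree correlation near 1, while a change in condition that rewires contacts pulls the correlation down.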


2021 ◽  
pp. 193229682098557
Author(s):  
Alysha M. De Livera ◽  
Jonathan E. Shaw ◽  
Neale Cohen ◽  
Anne Reutens ◽  
Agus Salim

Motivation: Continuous glucose monitoring (CGM) systems are an essential part of novel technology in diabetes management and care. CGM studies have become increasingly popular among researchers, healthcare professionals, and people with diabetes due to the large amount of useful information that can be collected with CGM systems. The analysis of data from these studies for research purposes, however, remains a challenge due to the characteristics and large volume of the data. Results: Currently, there are no publicly available interactive software applications that can perform statistical analyses and visualization of data from CGM studies. With the rapidly increasing popularity of CGM studies, such an application is becoming necessary for anyone who works with these large CGM datasets, in particular for those with little background in programming or statistics. CGMStatsAnalyser is a publicly available, user-friendly, web-based application that can be used to interactively visualize, summarize, and statistically analyze voluminous and complex CGM datasets together with subject characteristics.
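As an example of the kind of summary statistics such an application reports, the sketch below computes mean glucose, the coefficient of variation, and time-in-range (3.9–10.0 mmol/L) from a CGM trace. It is illustrative only, not the CGMStatsAnalyser implementation.

```python
import numpy as np

def cgm_summary(glucose_mmol):
    """Common CGM summary statistics for one subject's glucose trace:
    mean glucose, coefficient of variation (%), and time-in-range (%)
    for the conventional 3.9-10.0 mmol/L target range."""
    g = np.asarray(glucose_mmol, float)
    mean = g.mean()
    cv = 100.0 * g.std(ddof=1) / mean              # sample CV, in percent
    tir = 100.0 * np.mean((g >= 3.9) & (g <= 10.0))  # fraction of readings in range
    return {"mean": mean, "cv_percent": cv, "tir_percent": tir}
```

In practice such statistics are computed per subject and then modelled against subject characteristics, which is where the statistical-analysis features of the application come in.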


BMJ Open ◽  
2021 ◽  
Vol 11 (5) ◽  
pp. e043339
Author(s):  
Camila Olarte Parra ◽  
Lorenzo Bertizzolo ◽  
Sara Schroter ◽  
Agnès Dechartres ◽  
Els Goetghebeur

Objective: To evaluate the consistency of causal statements in observational studies published in The BMJ.
Design: Review of observational studies published in a general medical journal.
Data source: Cohort and other longitudinal studies describing an exposure-outcome relationship published in The BMJ in 2018. We also had access to the submitted papers and reviewer reports.
Main outcome measures: Proportion of published research papers with ‘inconsistent’ use of causal language. Papers whose language was consistently causal or consistently non-causal were classified as ‘consistently causal’ or ‘consistently not causal’, respectively. For the ‘inconsistent’ papers, we then compared the published and submitted versions.
Results: Of 151 published research papers, 60 described eligible studies. Of these 60, we classified the causal language used as ‘consistently causal’ (48%), ‘inconsistent’ (20%), and ‘consistently not causal’ (32%). Eleven of the 12 (92%) ‘inconsistent’ papers were already inconsistent on submission. The inconsistencies found in both submitted and published versions were mainly due to mismatches between objectives and conclusions: one section might be carefully phrased in terms of association while the other used causal language. When identifying only an association, some authors jumped to recommending acting on the findings as if motivated by the evidence presented.
Conclusion: Further guidance is necessary for authors on what constitutes a causal statement and how to justify or discuss the assumptions involved. Based on screening these papers, we provide a list of expressions beyond the obvious ‘cause’ word which may inspire a more comprehensive compendium on causal language.
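A screening of the sort described in the conclusion can be sketched as a word-list matcher. The expression lists below are a small invented sample, far short of the compendium the authors envisage, and the classification logic is a simplification of their manual review.

```python
import re

# A few expressions beyond the obvious "cause" word that tend to signal
# causal claims; a real compendium would be considerably longer.
CAUSAL = [r"\bcaus\w*", r"\bleads? to\b", r"\beffect of\b", r"\bprevent\w*",
          r"\breduc\w+ the risk\b", r"\bimpact of\b"]
ASSOC = [r"\bassociat\w*", r"\bcorrelat\w*", r"\blinked? (?:to|with)\b"]

def classify_section(text):
    """Flag a section as 'causal', 'not causal' or 'neither' from its wording;
    causal expressions take precedence over associational ones."""
    causal = any(re.search(p, text, re.I) for p in CAUSAL)
    assoc = any(re.search(p, text, re.I) for p in ASSOC)
    if causal:
        return "causal"
    return "not causal" if assoc else "neither"

def paper_consistency(objective, conclusion):
    """Mirror the review's check: compare the language of the objective
    against the language of the conclusion."""
    a, b = classify_section(objective), classify_section(conclusion)
    if "neither" in (a, b) or a == b:
        return "consistent"
    return "inconsistent"
```

Such a matcher catches the typical mismatch the review describes: an objective phrased as an association whose conclusion recommends acting on the exposure.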


2021 ◽  
Vol 14 (1) ◽  
Author(s):  
Ye Emma Zohner ◽  
Jeffrey S. Morris

Abstract
Background: The COVID-19 pandemic has caused major health and socio-economic disruptions worldwide. Accurate investigation of emerging data is crucial to inform policy makers as they construct viral mitigation strategies. Complications such as variable testing rates and time lags in counting cases, hospitalizations, and deaths make it challenging to accurately track and identify true infectious surges from available data, requiring a multi-modal approach that simultaneously considers testing, incidence, hospitalizations, and deaths. Although many websites and applications report a subset of these data, none provide graphical displays capable of comparing different states or countries on all these measures as well as various useful quantities derived from them. Here we introduce a freely available dynamic representation tool, COVID-TRACK, that allows the user to simultaneously assess time trends in these measures and compare states or countries, equipping them to investigate the potential effects of the different mitigation strategies and timelines used by various jurisdictions.
Findings: COVID-TRACK is a Python-based web application that provides a platform for tracking testing, incidence, hospitalizations, and deaths related to COVID-19, along with various derived quantities. It makes comparisons across states in the USA and countries in the world easy to explore, with useful transformation options including per capita, log scale, and/or moving averages. We illustrate its use by assessing various viral trends in the USA and Europe.
Conclusion: The COVID-TRACK web application is a user-friendly analytical tool for comparing data and trends related to the COVID-19 pandemic across areas in the United States and worldwide. It provides a unique platform on which trends can be monitored across geographical areas in the coming months to watch how the pandemic waxes and wanes over time at different locations around the USA and the globe.
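The transformation options mentioned in the findings (per capita and moving averages) can be sketched as follows; the function names and the 100,000-person denominator are our choices for the example, not necessarily COVID-TRACK's.

```python
import numpy as np

def per_capita(series, population, per=100_000):
    """Scale daily counts to events per 100,000 population, so that
    jurisdictions of different sizes become comparable."""
    return np.asarray(series, float) * per / population

def moving_average(series, window=7):
    """Trailing moving average; a 7-day window smooths the weekday
    reporting cycle common in COVID-19 case data."""
    s = np.asarray(series, float)
    kernel = np.ones(window) / window
    return np.convolve(s, kernel, mode="valid")
```

A log-scale view needs no extra transformation of the data itself; it is applied at the plotting axis.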


2017 ◽  
Vol 73 (3) ◽  
pp. 528-554 ◽  
Author(s):  
Rose Attu ◽  
Melissa Terras

Purpose: Since its launch in 2007, research has been carried out on the popular social networking website Tumblr. The purpose of this paper is to identify published Tumblr-based research, classify it to understand approaches and methods, and provide methodological recommendations for others.
Design/methodology/approach: Research regarding Tumblr was identified. Following a review of the literature, a classification scheme was adapted and applied to understand the research focus. Papers were quantitatively classified using open-coded content analysis of method, subject, approach, and topic.
Findings: The majority of published work relating to Tumblr concentrates on conceptual issues, followed by aspects of the messages sent; this focus has evolved over time. Perceived benefits are the platform’s long-form text posts, the ability to track tags, and the multimodal nature of the platform. Severe research limitations are caused by the lack of demographic, geo-spatial, and temporal metadata attached to individual posts, the limited Application Programming Interface (API), restricted access to data, and the large number of ephemeral posts on the site.
Research limitations/implications: This study focuses on Tumblr; the applicability of the approach to other media is not considered. The authors focus on published research and conference papers, so book content will not have been found using this method. Tumblr as a platform has falling user numbers, which may be of concern to researchers.
Practical implications: The authors identify practical barriers to research on the Tumblr platform, including the lack of metadata and of access to big data, explaining why Tumblr is not as popular as Twitter in academic studies.
Social implications: This paper highlights the breadth of topics covered by social media researchers, which allows us to understand popular online platforms.
Originality/value: There has not yet been an overarching study of the methods and purposes of those who study Tumblr. The authors identify Tumblr-related research papers from the first, appearing in July 2011, until July 2015. The classification derived here provides a framework that can be used to analyse social media research and in which to position Tumblr-related work, with recommendations on the benefits and limitations of the platform for researchers.

