Raman Spectroscopy Using dc Signal Detection and a Microcomputer: An Alternative Approach

1987 ◽  
Vol 41 (7) ◽  
pp. 1145-1147 ◽  
Author(s):  
E. Neil Lewis ◽  
Ira W. Levin

Raman spectroscopic data have been obtained with the use of direct current (dc) signal detection, an IBM PC/AT microcomputer, and commercially available software. Since photomultiplier currents of the order of nanoamps to microamps are readily attained for Raman emission under conditions of moderate laser excitation power (150–200 mW) and medium-resolution spectral slits (1–4 cm⁻¹), signal levels well within the domain measurable by dc signal detection techniques are achieved for a wide range of chemical and biochemical samples. Further, the digitization and signal-averaging capabilities of generic data acquisition boards and microcomputers allow dc detection to yield signal-to-noise ratios competitive with those derived from complementary pulse-counting techniques.
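As a rough illustration of the signal-averaging step mentioned above, the sketch below (synthetic data, not the authors' code) averages N digitized dc scans; assuming the noise on each scan is uncorrelated, the signal-to-noise ratio improves roughly as √N, which is the mechanism that lets dc detection approach the performance of pulse counting.

```python
import numpy as np

# Minimal sketch (not the authors' code): averaging N digitized dc scans.
# Uncorrelated noise on each scan falls off roughly as sqrt(N) under averaging.

rng = np.random.default_rng(0)

wavenumbers = np.linspace(800, 1800, 1000)               # cm^-1, hypothetical scan range
true_signal = np.exp(-((wavenumbers - 1450) / 15) ** 2)  # synthetic Raman band

def acquire_scan(noise_rms=0.5):
    """Simulate one digitized dc scan: signal plus uncorrelated noise."""
    return true_signal + rng.normal(0.0, noise_rms, size=true_signal.shape)

n_scans = 64
averaged = np.mean([acquire_scan() for _ in range(n_scans)], axis=0)

single_snr = true_signal.max() / 0.5
averaged_snr = true_signal.max() / (0.5 / np.sqrt(n_scans))
print(f"single-scan SNR ~ {single_snr:.1f}, {n_scans}-scan average SNR ~ {averaged_snr:.1f}")
```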

Author(s):  
Michael D. T. McDonnell ◽  
Daniel Arnaldo ◽  
Etienne Pelletier ◽  
James A. Grant-Jacob ◽  
Matthew Praeger ◽  
...  

Interactions between light and matter during short-pulse laser materials processing are highly nonlinear, and hence acutely sensitive to laser parameters such as the pulse energy, repetition rate, and number of pulses used. Due to this complexity, simulation approaches based on calculation of the underlying physical principles can often provide only a qualitative understanding of the inter-relationships between these parameters. An alternative approach, such as parameter optimisation, often requires a systematic and hence time-consuming experimental exploration of the available parameter space. Here, we apply neural networks for parameter optimisation and for predictive visualisation of expected outcomes in laser surface texturing with blind vias for tribology control applications. Critically, this method greatly reduces the amount of experimental laser machining data needed and the associated development time, without negatively impacting accuracy or performance. The techniques presented here could be applied in a wide range of fields and have the potential to significantly reduce the time and costs associated with laser process optimisation.
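A minimal sketch of the surrogate-modelling idea, using hypothetical data and scikit-learn's MLPRegressor rather than the authors' network: a small neural network is fitted to (pulse energy, repetition rate, pulse count) → outcome measurements, and the trained model is then searched over a parameter grid for settings predicted to match a target outcome.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Hedged sketch, not the authors' pipeline: a small surrogate network maps laser
# parameters to a measured outcome (here a hypothetical via depth), and the
# trained model is then searched for settings that match a target depth.
# The data and the outcome model are synthetic stand-ins for real machining data.

rng = np.random.default_rng(1)

# Hypothetical training data: columns are pulse energy (uJ), rep rate (kHz), pulse count
X = rng.uniform([1.0, 10.0, 1.0], [20.0, 1000.0, 200.0], size=(300, 3))
y = 0.4 * X[:, 0] * np.log1p(X[:, 2]) - 0.002 * X[:, 1] + rng.normal(0.0, 0.5, 300)

model = make_pipeline(StandardScaler(),
                      MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=5000, random_state=1))
model.fit(X, y)

# Search a dense grid of parameters for the predicted outcome closest to a 20 um target
grid = np.array(np.meshgrid(np.linspace(1, 20, 25),
                            np.linspace(10, 1000, 25),
                            np.linspace(1, 200, 25))).reshape(3, -1).T
predicted = model.predict(grid)
best = grid[np.argmin(np.abs(predicted - 20.0))]
print("parameters predicted closest to the target depth:", np.round(best, 1))
```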


2021 ◽  
Vol 13 (11) ◽  
pp. 2233
Author(s):  
Rasa Janušaitė ◽  
Laurynas Jukna ◽  
Darius Jarmalavičius ◽  
Donatas Pupienis ◽  
Gintautas Žilinskas

Satellite remote sensing is a valuable tool for coastal management, enabling repeated observation of nearshore sandbars. However, a lack of methodological approaches for sandbar detection prevents the wider use of satellite data in sandbar studies. In this paper, a novel, fully automated approach to extracting nearshore sandbars from high- and medium-resolution satellite imagery using a GIS-based algorithm is proposed. The method is composed of a multi-step workflow providing a wide range of data on morphological nearshore characteristics, including nearshore local relief, extracted sandbars, their crests and the shoreline. The proposed processing chain involves a combination of spectral indices, ISODATA unsupervised classification, a multi-scale Relative Bathymetric Position Index (RBPI), criteria-based selection operations, spatial statistics and filtering. The algorithm has been tested on 145 dates of PlanetScope and RapidEye imagery using a case study of the complex multiple-sandbar system on the Curonian Spit coast, Baltic Sea. Comparison of the results against 4 years of in situ bathymetric surveys shows strong agreement between measured and derived sandbar crest positions (R2 = 0.999 and 0.997), with average RMSEs of 5.8 and 7 m for the PlanetScope and RapidEye sensors, respectively. The accuracy of the proposed approach makes it feasible to study inter-annual and seasonal sandbar behaviour as well as short-term changes related to high-impact events. The algorithm's outputs make it possible to evaluate a range of sandbar characteristics, such as distance from the shoreline, length, width, count or shape, at a relevant spatiotemporal scale. The design of the method makes it compatible with most sandbar morphologies and suitable for other sandy nearshores. Tests of the described technique with Sentinel-2 MSI and Landsat-8 OLI data show that it can also be applied to publicly available medium-resolution satellite imagery from other sensors.
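The paper's exact RBPI formulation is not reproduced here; the sketch below shows one common way such an index can be built, as the difference between each cell's depth and the focal mean of its neighbourhood, averaged over several window sizes. The grid values, window sizes and peak threshold are illustrative assumptions only.

```python
import numpy as np
from scipy.ndimage import uniform_filter
from scipy.signal import find_peaks

# Hedged sketch of a Relative Bathymetric Position Index (RBPI): each cell's
# depth is compared with the mean depth of a square neighbourhood, so positive
# values mark local highs (bar crests) and negative values mark troughs.

def rbpi(depth_grid, window):
    """Depth relative to the focal mean within a square window (in cells)."""
    return depth_grid - uniform_filter(depth_grid, size=window)

def multiscale_rbpi(depth_grid, windows=(5, 11, 21)):
    """Average single-scale indices to emphasise features present at several scales."""
    return np.mean([rbpi(depth_grid, w) for w in windows], axis=0)

# Synthetic nearshore grid: depth increasing offshore with two bar-like ridges
x = np.linspace(0, 600, 300)                          # metres offshore (2 m cells)
profile = -0.02 * x \
    + 0.6 * np.exp(-((x - 150) / 30) ** 2) \
    + 0.4 * np.exp(-((x - 350) / 40) ** 2)
depth = np.tile(profile, (50, 1))                     # 50 alongshore rows

index = multiscale_rbpi(depth)
crest_cols, _ = find_peaks(index[25], height=0.01)    # one interior cross-shore transect
print("potential bar crest positions (m offshore):", np.round(x[crest_cols], 1))
```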


2018 ◽  
Vol 17 (1) ◽  
pp. 160940691878345 ◽  
Author(s):  
Benjamin L. Read

Many qualitative social scientists conduct single-session interviews with large numbers of individuals so as to maximize the sample size and obtain a wide range of study participants. Yet in some circumstances, one-shot interviews cannot produce information of adequate quality, quantity, and validity. This article explains the conditions that call for an alternative approach, serial interviewing, which entails interviewing participants on multiple occasions. This method is appropriate when studying complex or ill-defined issues, when interviews are subject to time constraints, when exploring change or variation over time, when participants are reluctant to share valid information, and when working with critical informants. A further benefit is the opportunity it provides for verifying and cross-checking information. This article delineates the general features of this technique. Through a series of encounters, the researcher builds familiarity and trust, probes a range of key topics from multiple angles, explores different facets of participants’ experiences, and learns from events that happen to take place during the interviews. This helps overcome biases associated with one-off interviews, including a tendency toward safe, simple answers in which participants flatten complexity, downplay sociopolitical conflict, and put themselves in a flattering light. The article illustrates the utility of this approach through examples drawn from published work and through a running illustration based on the author’s research on elected neighborhood leaders in Taipei. Serial interviewing helped produce relatively accurate and nuanced data concerning the power these leaders wield and their multiple roles as intermediaries between state and society.


2021 ◽  
Author(s):  
Mengxi Tan ◽  
xingyuan xu ◽  
David Moss

We report broadband RF channelizers based on integrated Kerr optical frequency micro-combs combined with passive micro-ring resonator filters, with the micro-combs having channel spacings of 200 GHz and 49 GHz. This approach to realizing RF channelizers offers reduced complexity, size, and potential cost for a wide range of microwave signal detection applications.


2021 ◽  
Author(s):  
Stella Tsichlaki ◽  
Lefteris Koumakis ◽  
Manolis Tsiknakis

BACKGROUND Diabetes is a chronic condition that necessitates regular monitoring and self-management of the patient's blood glucose levels. People with type 1 diabetes (T1D) can live a productive life if they receive proper diabetes care. Nonetheless, loose glycemic control increases the risk of developing hypoglycemia. Such episodes can occur for a variety of reasons, such as taking additional doses of insulin, skipping meals, or over-exercising, and their symptoms range from mild dysphoria to far more severe conditions if they are not detected in a timely manner.
OBJECTIVE In this review, we report on innovative detection techniques and tactics for identifying and preventing hypoglycemic episodes, focusing on type 1 diabetes.
METHODS A systematic literature search following the PRISMA guidelines was performed on the PubMed, Google Scholar, IEEE Xplore and ACM digital libraries to find articles on technologies for hypoglycemia detection in type 1 diabetes patients.
RESULTS The approaches presented have been utilized or devised to enhance blood glucose monitoring and improve its ability to forecast future glucose levels, which could aid the prediction of future hypoglycemic episodes. We identified nineteen predictive models for hypoglycemia, specifically in type 1 diabetes, employing a wide range of algorithmic methodologies spanning statistics (10%), machine learning (52%) and deep learning (38%). The algorithms employed most often are Kalman filtering and classification models (SVM, KNN, random forests). The performance of the predictive models was overall satisfactory, reaching accuracies between 70% and 99%, which shows that such technologies can facilitate the prediction of T1D hypoglycemia.
CONCLUSIONS It is evident that CGM can improve glucose control in diabetes, but predictive models for hypo- and hyperglycemia using only mainstream noninvasive sensors such as wristbands and smartwatches are foreseen to be the next step for mHealth in T1D. Prospective studies are required to demonstrate the value of such models in real-life mHealth interventions.
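As a purely illustrative example of the classification-model family reviewed (not any specific model among the nineteen identified), the sketch below trains a random forest on synthetic CGM-derived features to predict whether glucose will fall below a hypoglycemic threshold within a short horizon.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# Generic sketch (not one of the reviewed models): a random-forest classifier
# predicting whether glucose will fall below 70 mg/dL within a 30-minute
# horizon, using simple features from a CGM window. All data are synthetic.

rng = np.random.default_rng(2)

def make_features(n=2000):
    glucose = rng.normal(120, 35, n)               # current CGM reading, mg/dL
    trend = rng.normal(0, 1.5, n)                  # mg/dL per minute over last 15 min
    insulin_on_board = rng.uniform(0, 4, n)        # units
    future = glucose + 30 * trend - 8 * insulin_on_board + rng.normal(0, 10, n)
    X = np.column_stack([glucose, trend, insulin_on_board])
    y = (future < 70).astype(int)                  # 1 = hypoglycemia within horizon
    return X, y

X, y = make_features()
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=2)

clf = RandomForestClassifier(n_estimators=200, random_state=2)
clf.fit(X_train, y_train)
print("hold-out accuracy:", round(accuracy_score(y_test, clf.predict(X_test)), 3))
```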


2021 ◽  
Author(s):  
Angelo Odetti ◽  
Federica Braga ◽  
Fabio Brunetti ◽  
Massimo Caccia ◽  
Simone Marini ◽  
...  

The IT-HR InnovaMare project, led by the Croatian Chamber of Economy, brings together policy instruments and key players for the development of innovative technologies for the sustainable development of the Adriatic Sea (https://www.italy-croatia.eu/web/innovamare). The project aims at enhancing cross-border cooperation among research, public and private stakeholders through the creation of a Digital Innovation Hub (DIH). The goal is to increase the effectiveness of innovation in underwater robotics and sensors to achieve and maintain a healthy and productive Adriatic Sea, one of the crucial and strategic societal challenges existing at the cross-border level. Within InnovaMare, the CNR ISMAR and INM institutes and OGS, in cooperation with the University of Zagreb and other project partners, contribute to developing a solution for accessing and monitoring extremely shallow water by means of portable, modular, reconfigurable and highly maneuverable robotic vehicles. The identified vehicle is SWAMP, an innovative highly modular catamaran ASV recently developed by CNR-INM. SWAMP is characterised by small size, low draft, new materials, an azimuth propulsion system for shallow waters and a modular WiFi-based hardware and software architecture. Two SWAMP vehicles will be enhanced with a series of kits, tools and sensors to perform a series of strategic actions in the environmental monitoring of the Venice Lagoon:
i) An air-cushion-system kit will be designed and developed. The vehicle will become a side-wall air-cushion vehicle with reduced drag and increased speed. This will also increase the payload while reducing the draft.
ii) An intelligent winch kit with a communication cable for the management of underwater sensors and tools.
iii) A GPS-RTK kit for highly accurate positioning in the range of centimeters.
iv) An autonomous programmable device for image acquisition and processing based on the Guard1 camera. This camera acquires image content and, by means of a supervised machine learning approach, recognises/classifies features such as fish, zooplankton, seabed and infrastructure. The system is conceived for autonomous monitoring activities extended in time on fixed or mobile platforms.
v) A multibeam echo-sounder (MBES) coupled with an IMU (for pitch-roll compensation). MBES data, also coupled with camera imagery, can be used through image-detection techniques for reconstruction and comprehensive knowledge of the underwater environment and infrastructure. Possible analyses in coastal areas include seabed mapping (also for cultural heritage, offshore structures and resources) and monitoring of biodiversity, hydrocarbons, marine litter and pollution.
vi) An underwater radiometer for multiple analyses: temporal dynamics of the optical properties of water; temporal dynamics of water turbidity from water reflectance; mapping of submerged vegetation and water depth in optically shallow water; and production of reference data for validation of satellite data.
vii) An automatic nutrient analyzer for real-time nutrient monitoring. This sensor measures nitrate with high accuracy over a wide range of environmental conditions (including extremely turbid and high-CDOM conditions), from blue-ocean nitraclines to storm runoff in rivers and streams.
The final result of this pilot action is the creation of an innovative prototype platform for marine environmental monitoring. This will be validated through the analysis of results and the drawing up of guidelines for the improvement of underwater conditions.


Author(s):  
Bilal Muhammad Khan ◽  
Rabia Bilal

Modulated signals used in communication systems exhibit cyclic periodicity, primarily due to the sinusoidal product modulators, repeating preambles, coding and multiplexing used in modern communication. This property can be analyzed using cyclostationary analysis. The spectral correlation function (SCF), obtained from the cyclic autocorrelation function (CAF), has unique features for different modulated signals and for noise. Different techniques are applied to the SCF to extract features on the basis of which the decision between signal and noise is made. In this chapter, a study and analysis of different modulated signals used in satellite communication is presented using the SCF. A comparison of several signal detection techniques is also provided, based on the unique features exhibited by a normalized vector calculated from the SCF along the frequency axis. Moreover, a signal detection technique is proposed that identifies the presence of a signal or noise in the analyzed data within defined threshold limits.
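A minimal sketch of the underlying cyclostationary feature test, with an assumed BPSK test signal rather than the chapter's satellite waveforms: the cyclic autocorrelation is estimated at candidate cycle frequencies and normalized by the signal power, and a peak at the symbol-rate cycle frequency (evaluated at a half-symbol lag) distinguishes the modulated signal from white noise.

```python
import numpy as np

# Minimal sketch (not the chapter's implementation) of a cyclostationary
# feature test. The waveform, lag and candidate frequencies are assumptions.

rng = np.random.default_rng(3)

def caf(x, alpha, tau):
    """Cyclic autocorrelation estimate R_x^alpha(tau) for a single lag tau."""
    n = np.arange(len(x) - tau)
    return np.mean(x[n + tau] * np.conj(x[n]) * np.exp(-2j * np.pi * alpha * n))

def detection_statistic(x, alphas, tau):
    """CAF magnitude at each candidate cycle frequency, normalized by signal power."""
    power = np.abs(caf(x, 0.0, 0))
    return np.array([np.abs(caf(x, a, tau)) for a in alphas]) / power

sps = 8                                                   # samples per symbol
symbols = rng.choice([-1.0, 1.0], size=512)
bpsk = np.repeat(symbols, sps) + 0.5 * rng.normal(size=512 * sps)
noise = rng.normal(size=512 * sps)

alphas = [1 / sps, 0.17, 0.29]                            # candidate cycle frequencies
tau = sps // 2                                            # half-symbol lag
print("BPSK :", np.round(detection_statistic(bpsk, alphas, tau), 3))
print("noise:", np.round(detection_statistic(noise, alphas, tau), 3))
# A value well above a chosen threshold (e.g. 0.1) at alpha = 1/sps indicates a signal.
```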


2016 ◽  
Vol 10 (4) ◽  
pp. 1-32 ◽  
Author(s):  
Abdelaziz Amara Korba ◽  
Mehdi Nafaa ◽  
Salim Ghanemi

In this paper, a cluster-based hybrid security framework for ad hoc networks, called HSFA, is proposed and evaluated. The proposed security framework combines both specification-based and anomaly-based detection techniques to efficiently detect and prevent a wide range of routing attacks. In the proposed hierarchical architecture, cluster nodes run a host specification-based intrusion detection system to detect specification-violation attacks such as fabrication and replay, while the cluster heads run an anomaly-based intrusion detection system to detect wormhole and rushing attacks. The specification-based detection approach relies on a set of automatically generated specifications, while the anomaly detection uses statistical techniques. The proposed security framework provides an adaptive response against attacks to prevent damage to the network. The framework is evaluated by simulation in the presence of malicious nodes that can launch different attacks. Simulation results show that the proposed hybrid security framework performs significantly better than other existing mechanisms.
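The paper's statistical anomaly-detection model is not specified in the abstract; the sketch below illustrates one simple statistical scheme a cluster head could run, flagging nodes whose routing metrics deviate from a baseline learned during normal operation by more than a z-score threshold. The metric names, baseline values and threshold are assumptions for the example.

```python
import numpy as np

# Illustrative sketch (not the paper's exact statistical model): a cluster head
# learns a baseline profile of routing metrics and flags nodes whose current
# metrics deviate from that profile beyond a z-score threshold.

rng = np.random.default_rng(4)
THRESHOLD = 3.0                       # flag deviations beyond 3 standard deviations

# Baseline window: rows are observations of (RREQ rate /s, forwarding ratio, hop delay ms)
baseline = np.column_stack([
    rng.normal(4.5, 0.6, 200),
    rng.normal(0.96, 0.02, 200),
    rng.normal(12.0, 1.5, 200),
])
mu, sigma = baseline.mean(axis=0), baseline.std(axis=0) + 1e-9

def is_anomalous(sample):
    """True if any metric deviates from the learned profile beyond the threshold."""
    return np.any(np.abs((sample - mu) / sigma) > THRESHOLD)

normal_node = np.array([4.8, 0.95, 13.0])
rushing_node = np.array([45.0, 0.20, 55.0])   # flooding RREQs, dropping packets
print("normal node anomalous? ", is_anomalous(normal_node))
print("rushing node anomalous?", is_anomalous(rushing_node))
```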

