Review of analog-to-information converters

2019 ◽  
pp. 6-12
Author(s):  
M. N. Polunin ◽  
A. V. Bykova

The implementation of high-throughput systems with the traditional approach to sampling the analog signal according to the Kotelnikov theorem runs into the problems of high power consumption and the need to store and transfer large amounts of data. An alternative approach to sampling and processing information is based on advances in compressive sampling (compressed sensing) theory. The paper provides a brief overview of the main provisions of this theory and considers examples of its practical use for implementing information-acquisition systems, namely analog-to-information converters. The purpose of these devices is to relieve the load on conventional analog-to-digital converters by reducing the sampling rate and the amount of output data. The main architectures of analog-to-information converters are considered: non-uniform sampling, random filter, random demodulator, modulated wideband converter, compressive multiplexer, random modulator pre-integrator, and spread-spectrum random modulator pre-integrator.
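
To make the compressive-sampling model behind these architectures concrete, here is a minimal, illustrative Python sketch (not tied to any specific converter from the review): a K-sparse signal is observed through far fewer random projections than the Kotelnikov/Nyquist count and recovered with a simple orthogonal matching pursuit loop. The parameter values N, M, and K are arbitrary assumptions for the demonstration.

```python
# Minimal compressive-sensing sketch (illustrative only, not a specific AIC architecture):
# a K-sparse signal of length N is observed through M << N random projections,
# then recovered with a simple orthogonal matching pursuit (OMP) loop.
import numpy as np

rng = np.random.default_rng(0)
N, M, K = 256, 64, 5                      # signal length, measurements, sparsity (assumed)

x = np.zeros(N)
x[rng.choice(N, K, replace=False)] = rng.standard_normal(K)   # K-sparse test signal

Phi = rng.standard_normal((M, N)) / np.sqrt(M)                # random measurement matrix
y = Phi @ x                                                    # M compressive measurements

# OMP: greedily pick the column most correlated with the residual, then refit.
support, residual = [], y.copy()
for _ in range(K):
    support.append(int(np.argmax(np.abs(Phi.T @ residual))))
    coeffs, *_ = np.linalg.lstsq(Phi[:, support], y, rcond=None)
    residual = y - Phi[:, support] @ coeffs

x_hat = np.zeros(N)
x_hat[support] = coeffs
print("relative reconstruction error:", np.linalg.norm(x - x_hat) / np.linalg.norm(x))
```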

Sensors ◽  
2019 ◽  
Vol 19 (13) ◽  
pp. 2923 ◽  
Author(s):  
Zheng ◽  
Tong ◽  
Li ◽  
Tao ◽  
Song ◽  
...  

The design of underwater acoustic (UWA) modems for compact underwater platforms such as autonomous underwater vehicles (AUVs) is challenging because of the practical requirement to keep an engineering balance between performance and system overhead. In this type of mobile communication scenario, Doppler spread as well as multipath demands substantial attention in the system's design and engineering. Specifically, for a small AUV, the large computational complexity of real-time resampling in classic Doppler correction poses a significant difficulty for the limited capability of a low-cost processor. In this paper, a Doppler compensation approach based on an adjustable AD (analog-to-digital) sampling rate is proposed to enable low-complexity hardware implementation. Based on this, a direct sequence spread spectrum (DSSS) acoustic modem is designed for a low-cost, small-sized AUV. The performance of this acoustic modem is then evaluated in terms of robustness to varying Doppler as well as AUV integration. Finally, experimental results obtained on a commercial, small-sized AUV at different speeds are reported to verify the effectiveness of the proposed acoustic modem.
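
As a rough illustration of the sampling-rate idea described above, the Python sketch below computes a Doppler scaling factor and the correspondingly adjusted ADC clock. The preamble-duration estimator, sound speed, and rate values are assumptions made for the example, not details taken from the paper.

```python
# Hedged sketch of the general idea behind Doppler compensation via an adjustable ADC clock:
# instead of resampling digitally (costly on a small AUV processor), the ADC sampling rate
# is scaled by the estimated Doppler factor so samples land back on the nominal grid.
# The estimator below (nominal vs. measured preamble duration) is an illustrative assumption.

SOUND_SPEED = 1500.0          # assumed nominal underwater sound speed, m/s

def doppler_scale(nominal_preamble_s: float, measured_preamble_s: float) -> float:
    """Doppler scaling factor a: the received waveform is time-scaled by 1 / (1 + a)."""
    return nominal_preamble_s / measured_preamble_s - 1.0

def compensated_adc_rate(nominal_fs_hz: float, scale: float) -> float:
    """ADC rate that undoes the time scaling, so digitized chips match the nominal rate."""
    return nominal_fs_hz * (1.0 + scale)

# Example: an AUV closing at 2 m/s compresses a 100 ms preamble accordingly.
a = doppler_scale(0.100, 0.100 / (1.0 + 2.0 / SOUND_SPEED))
print(f"estimated Doppler scale: {a:.5f}")
print(f"adjusted ADC rate for a 96 kHz nominal clock: {compensated_adc_rate(96_000, a):.1f} Hz")
```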


Author(s):  
Caroline M. Leaf ◽  
Brenda Louw ◽  
Isabel Uys

The current article suggests that alternatives to the current traditional learning methods are essential if learning institutions are to provide people with effective life skills that enable them to be autonomous learners. This suggestion is based on a body of literature on alternative learning which stresses the need for fundamental change, and hence a paradigm shift in the perception of learning, in order to cope with the world-wide information explosion. The alternative, non-traditional approach proposed is geodesic learning, which stresses learning how to learn and self-directed inquiry as essential life skills that enable systems, as well as the people in those systems, to bring about their own transformation in response to changing situations and requirements. The current article discusses an alternative service delivery model, the geodesic information processing model, which falls within the realm of the geodesic philosophy. The implications of this alternative approach for the speech-language therapist are discussed.


Author(s):  
Chunlong Wu ◽  
Benjamin Ciavola ◽  
John Gershenson

Function-based design is the traditional approach in engineering design theory, proving useful and practical in many cases but showing limitations in others. Affordance-based design is an alternative approach that attempts to address some of function theory’s limitations by focusing attention on the interactions between systems. This paper compares function-based design with affordance-based design by examining their philosophies, tools, abilities, and suitability along a number of dimensions. We conclude that the approaches are compatible and suggest future work to realize their integration.


2015 ◽  
Vol 32 (3) ◽  
pp. 202-208 ◽  
Author(s):  
Paul R. Martin ◽  
Moira Callan ◽  
Archana Kaur ◽  
Karen Gregg

The traditional approach to headache trigger management is to advise avoidance of all triggers, but we have advocated an alternative approach called ‘Learning to Cope with Triggers’ (LCT), in which the objective is to desensitise headache sufferers to some triggers or to build up tolerance for the triggers, using exposure techniques. A recent publication established the efficacy of this approach to trigger management. Reported here are three cases to illustrate how LCT is used in practice. Two cases were male and one was female, with ages ranging from 32 to 67 years. The headache diagnoses were frequent episodic tension-type headache, migraine without aura, and chronic tension-type headache; all had had headaches since childhood/adolescence. The headache triggers that were the focus of the intervention were heat, tiredness, and stress/anger. Post-treatment, changes in the capacity of the triggers to elicit headaches were reported in all three cases. Reductions in headaches from pre- to post-treatment, and from pre- to 4-month follow-up, were: case 1, 69% and 60% respectively; case 2, 76% and 80% respectively; and case 3, 73% and 61% respectively. Decreases in medication consumption, and enhanced self-efficacy were also recorded.


2021 ◽  
Vol 11 (1) ◽  
Author(s):  
Jihwan Boo ◽  
Mark D. Hammig ◽  
Manhee Jeong

Dual particle imaging, in which both neutrons and gamma-rays in the environment can be individually characterized, is particularly attractive for monitoring mixed radiation emitters such as special nuclear materials (SNM). Effective SNM localization and detection benefits from high instrument sensitivity so that real-time imaging, or imaging with a limited number of acquired events, is enabled. For portable applications, one also desires a dual particle imager (DPI) that is readily deployable. We have developed a hand-held DPI equipped with a pixelated stilbene-silicon photomultiplier (SiPM) array module and low sampling-rate analog-to-digital converters (ADCs) processed via a multiplexed readout. The stilbene-SiPM array (12 × 12 pixels) is capable of effectively performing pulse shape discrimination (PSD) between gamma-ray and neutron events and neutron/gamma-ray source localization on the imaging plane, as demonstrated with 252Cf neutron/gamma and 137Cs gamma-ray sources. The low sampling-rate ADCs connected to the stilbene-SiPM array module result in a compact instrument with high sensitivity that provides a gamma-ray image of a 137Cs source, producing 6.4 μR/h at 1 m, in less than 69 s. A neutron image for a 3.5 × 10^5 n/s 252Cf source can also be obtained in less than 6 min at 1 m from the center of the system. The instrument images successfully with a field of view of 50° and provides an angular resolution of 6.8°.
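
For context on how gamma-ray/neutron separation of this kind is commonly computed, the sketch below implements a generic charge-comparison (tail-to-total) PSD ratio on synthetic digitized pulses. The gate lengths, digitizer rate, and pulse shapes are illustrative assumptions; the paper's actual PSD algorithm and settings may differ.

```python
# Generic charge-comparison PSD sketch: neutron-induced pulses in stilbene carry a larger
# slow-decay (delayed fluorescence) fraction, so their tail-to-total charge ratio is higher.
import numpy as np

def psd_ratio(pulse: np.ndarray, fs_hz: float,
              tail_start_ns: float = 30.0, gate_end_ns: float = 300.0) -> float:
    """Tail-to-total charge ratio measured from the pulse peak; larger -> more neutron-like."""
    peak = int(np.argmax(pulse))
    tail_start = peak + int(tail_start_ns * 1e-9 * fs_hz)
    gate_end = peak + int(gate_end_ns * 1e-9 * fs_hz)
    total = pulse[peak:gate_end].sum()
    tail = pulse[tail_start:gate_end].sum()
    return tail / total if total > 0 else 0.0

# Synthetic pulses: the neutron-like pulse has a larger slow component (assumed shapes).
fs = 250e6                                   # example digitizer rate (assumed)
t = np.arange(0, 400e-9, 1 / fs)
gamma_like = np.exp(-t / 5e-9)
neutron_like = 0.7 * np.exp(-t / 5e-9) + 0.3 * np.exp(-t / 80e-9)
print("gamma-like PSD ratio:  ", round(psd_ratio(gamma_like, fs), 3))
print("neutron-like PSD ratio:", round(psd_ratio(neutron_like, fs), 3))
```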


1982 ◽  
Vol 26 (8) ◽  
pp. 721-721
Author(s):  
Dudley G. Letbetter

A ten-step approach for developing a comprehensive but concise, design-oriented handbook of human performance is proposed, with emphasis on the first two steps. The ten steps are:
1. Identify and define classes and subclasses of human performance.
2. Develop a concise format for abstracting information for each lowest-level subclass.
3. Establish the need for a handbook.
4. Prepare an abstract for each literature source covering a lowest-level subclass.
5. Develop a concise format, including design recommendations, for summarizing each lowest-level subclass.
6. Prepare a summary, including design recommendations, for each lowest-level subclass.
7. Collate summaries and abstracts.
8. Prepare the table of contents and index.
9. Publish the handbook.
10. Distribute the handbook.
For Step 1, a functional rather than traditional approach is presented. Classes of human performance are identified and defined in terms of basic functions suitable for all applications. Five major functions are identified, subdivided and defined:
R. Receiving information: Receiving all information, except receiving communicated information, which is a subclass of communicating, the second major function. Receiving information includes perception of all natural-environmental and artificially-displayed information (other than perception of directly or indirectly communicated information), and input loading and interacting, which are also considerations in communicating.
C. Communicating: All exchanging of information between humans by a system or systems of symbols, signs and/or behavior. Communicating consists of emitting and receiving communicated information: oral or non-oral, direct or indirect, unaided or aided, and voluntary or involuntary.
P. Processing information: Operating on and treating received information; basic handling of perceived information. Processing information includes storing and retrieving information (recognizing, recalling, reproducing), acquiring and using concepts (acquiring, symbolizing, defining), altering information (calculating and computing, logical and mathematical transforming, encoding and decoding), reasoning (intuiting, inductive and deductive explicit reasoning), and imagining (anticipatory, creative, fanciful).
M. Managing personal performance: Guiding and directing one's own performance. This is the "executive" function, concerned with carrying into effect and integrating the four other major functions. Managing personal performance includes valuing, making decisions, and initiating and sustaining personal performance.
A. Acting: Carrying into effect; changing system physical states. This fifth major function covers producing physical effects. The means is exerting force within oneself and/or on other objects; the direct or indirect result or output is work. Acting includes direct acting (e.g., manually lifting an object) and indirect acting (e.g., operating the controls of an overhead crane lifting an object).


2021 ◽  
Vol 12 (3) ◽  
pp. 140-165
Author(s):  
Mahdi Khosravy ◽  
Thales Wulfert Cabral ◽  
Max Mateus Luiz ◽  
Neeraj Gupta ◽  
Ruben Gonzalez Crespo

Compressive sensing enables the reconstruction of a signal/image from compressive measurements acquired with a much lower number of samples than the minimum required by the Nyquist sampling theorem. Random acquisition is widely suggested and used for compressive sensing; it deploys randomness in the measurement process, together with the sparsity structure, for compressive sampling of the signal/image. The article goes through the literature to date, collects the main methods, and briefly describes the way each of them applies randomness to compressive sensing. This article is a comprehensive review of random acquisition techniques in compressive sensing. These techniques are reviewed under the main categories of (1) random demodulator, (2) random convolution, (3) modulated wideband converter model, (4) compressive multiplexer diagram, (5) random equivalent sampling, (6) random modulation pre-integration, (7) quadrature analog-to-information converter, and (8) randomly triggered modulated-wideband compressive sensing (RT-MWCS).
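
As an example of the first category, the following Python sketch implements the standard discrete model of the random demodulator: the Nyquist-rate signal is multiplied by a pseudorandom ±1 chipping sequence and then integrated-and-dumped down to M low-rate measurements; the equivalent measurement matrix is also formed to check the model. Signal length, sparsity, and measurement count are illustrative assumptions, not values from any reviewed work.

```python
# Minimal random demodulator sketch: chip the Nyquist-rate signal with a pseudorandom +/-1
# sequence, then integrate-and-dump to obtain M compressive measurements.
import numpy as np

rng = np.random.default_rng(1)
N, M = 512, 64                       # Nyquist-rate length and number of compressive samples
assert N % M == 0
R = N // M                           # integration (decimation) factor

# Sparse-in-frequency test signal: a few active tones on the DFT grid.
tones = rng.choice(N // 2, size=3, replace=False)
n = np.arange(N)
x = sum(np.cos(2 * np.pi * k * n / N) for k in tones)

chips = rng.choice([-1.0, 1.0], size=N)          # pseudorandom chipping sequence
y = (chips * x).reshape(M, R).sum(axis=1)        # integrate-and-dump: M low-rate measurements

# Equivalent measurement matrix acting on the time-domain signal, so that Phi @ x == y.
Phi = np.zeros((M, N))
for m in range(M):
    Phi[m, m * R:(m + 1) * R] = chips[m * R:(m + 1) * R]
print("model consistency:", np.allclose(Phi @ x, y))
```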

