Smart Image Sensor with High-speed High-sensitivity ID Beacon Detection for Augmented Reality System

Author(s):  
Yusuke Oike ◽  
Makoto Ikeda ◽  
Kunihiro Asada
2021 ◽  
Vol 30 (1) ◽  
pp. 61-65
Author(s):  
Sang-Hwan Kim ◽  
Hyeunwoo Kwen ◽  
Juneyoung Jang ◽  
Young-Mo Kim ◽  
Jang-Kyoo Shin

2020 ◽  
Author(s):  
Jill Juneau ◽  
Guillaume Duret ◽  
Joshua P. Chu ◽  
Alexander V. Rodriguez ◽  
Savva Morozov ◽  
...  

Observing the activity of large populations of neurons in vivo is critical for understanding brain function and dysfunction. The use of fluorescent genetically-encoded calcium indicators (GECIs) in conjunction with miniaturized microscopes is an exciting emerging toolset for recording neural activity in unrestrained animals. Despite their potential, current miniaturized microscope designs are limited by image sensors with low frame rates, sensitivity, and resolution. Beyond GECIs, many neuroscience applications would benefit from other emerging neural indicators, such as fluorescent genetically-encoded voltage indicators (GEVIs), which have faster temporal resolution to match neuron spiking but require imaging at high speeds to properly sample the activity-dependent signals. We integrated an advanced CMOS image sensor into a popular open-source miniaturized microscope platform. MiniFAST is a fast and sensitive miniaturized microscope capable of 1080p video, 1.5 µm resolution, frame rates up to 500 Hz, and high gain (up to 70 dB) for imaging in extremely low light conditions. We report high-speed 500 Hz in vitro imaging of a GEVI and ∼300 Hz in vivo imaging of transgenic Thy1-GCaMP6f mice. Finally, we show the potential for reduced photobleaching by using high-gain imaging with ultra-low excitation light power (0.05 mW) at 60 Hz frame rates while still resolving Ca2+ spiking activity. Our results extend miniaturized microscope capabilities in high-speed imaging, high sensitivity, and increased resolution, opening the door for the open-source community to use fast and dim neural indicators.
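For a sense of scale, the raw data throughput implied by 1080p video at 500 Hz can be estimated with a few lines of arithmetic. The sketch below assumes a 12-bit pixel depth, which is not stated in the abstract and is used here purely for illustration.

```python
# Back-of-the-envelope throughput for a 1080p, 500 Hz miniscope stream.
# WIDTH, HEIGHT, and FRAME_RATE_HZ come from the abstract; BIT_DEPTH is
# an assumed value, since the sensor's output bit depth is not given.

WIDTH, HEIGHT = 1920, 1080      # 1080p frame size
FRAME_RATE_HZ = 500             # maximum frame rate reported
BIT_DEPTH = 12                  # assumed bits per pixel (illustrative)

pixels_per_frame = WIDTH * HEIGHT
bits_per_second = pixels_per_frame * BIT_DEPTH * FRAME_RATE_HZ

print(f"{pixels_per_frame:,} pixels per frame")
print(f"{bits_per_second / 1e9:.1f} Gbit/s raw "
      f"(~{bits_per_second / 8 / 1e9:.2f} GB/s)")
```

Under these assumptions the stream is on the order of 12 Gbit/s (roughly 1.5 GB/s), which illustrates why sensor readout and data handling dominate high-speed miniscope design.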


Sensors ◽  
2018 ◽  
Vol 18 (9) ◽  
pp. 3112 ◽  
Author(s):  
Vu Dao ◽  
Nguyen Ngo ◽  
Anh Nguyen ◽  
Kazuhiro Morimoto ◽  
Kazuhiro Shimonomura ◽  
...  

The paper presents an ultra-high-speed image sensor for motion pictures of reproducible events emitting very weak light. The sensor is backside-illuminated. Each pixel is equipped with multiple collection gates (MCG) at the center of the front side, and each collection gate is connected to a large in-pixel memory unit that accumulates image signals captured by repetitive imaging. The combination of backside illumination, image signal accumulation, and slow readout from the in-pixel signal storage after an image capture operation offers very high sensitivity. Pipelined signal transfer from the MCG to the in-pixel memory units enables the sensor to achieve a large frame count and a very high frame rate at the same time. A test sensor with 32 × 32 pixels was fabricated. Each pixel has four collection gates, each connected to a memory unit with 305 elements, for a total frame count of 1220 (4 × 305) frames. The test camera achieved 25 Mfps, while the sensor was designed to operate at 50 Mfps.
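The in-pixel memory depth directly bounds the continuous recording window. A quick calculation using only the figures quoted in the abstract shows how short that window is at these frame rates:

```python
# Recording window implied by the in-pixel memory depth and frame rate.
# All numbers come from the abstract: 305 memory elements x 4 collection
# gates = 1220 frames; 25 Mfps achieved, 50 Mfps design target.

MEMORY_ELEMENTS_PER_GATE = 305
COLLECTION_GATES = 4
frames = MEMORY_ELEMENTS_PER_GATE * COLLECTION_GATES   # 1220 frames

for fps in (25e6, 50e6):
    window_us = frames / fps * 1e6
    print(f"{frames} frames at {fps / 1e6:.0f} Mfps -> "
          f"{window_us:.1f} µs record length")
# 1220 / 25 Mfps ≈ 48.8 µs; 1220 / 50 Mfps ≈ 24.4 µs
```

This is why the sensor targets reproducible events: the full memory captures only tens of microseconds, but repetitive imaging lets the signal accumulate over many repetitions of the same event.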


2019 ◽  
Author(s):  
Zachary VanAernum ◽  
Florian Busch ◽  
Benjamin J. Jones ◽  
Mengxuan Jia ◽  
Zibo Chen ◽  
...  

It is important to assess the identity and purity of proteins and protein complexes during and after protein purification to ensure that samples are of sufficient quality for further biochemical and structural characterization, as well as for use in consumer products, chemical processes, and therapeutics. Native mass spectrometry (nMS) has become an important tool in protein analysis due to its ability to retain non-covalent interactions during measurements, making it possible to obtain protein structural information with high sensitivity and at high speed. Interference from non-volatile components is typically alleviated by offline buffer exchange, which is time-consuming and difficult to automate. We provide a protocol for rapid online buffer exchange (OBE) nMS to directly screen structural features of pre-purified proteins, protein complexes, or clarified cell lysates. Information obtained by OBE nMS can be used for fast (<5 min) quality control and can further guide protein expression and purification optimization.


Sensors ◽  
2021 ◽  
Vol 21 (9) ◽  
pp. 3061
Author(s):  
Alice Lo Valvo ◽  
Daniele Croce ◽  
Domenico Garlisi ◽  
Fabrizio Giuliano ◽  
Laura Giarré ◽  
...  

In recent years, we have witnessed impressive advances in augmented reality systems and computer vision algorithms based on image processing and artificial intelligence. Thanks to these technologies, mainstream smartphones can estimate their own motion in 3D space with high accuracy. In this paper, we exploit such technologies to support the autonomous mobility of people with visual disabilities, identifying pre-defined virtual paths and providing context information, reducing the distance between the digital and real worlds. In particular, we present ARIANNA+, an extension of ARIANNA, a system explicitly designed for indoor and outdoor localization and navigation for visually impaired people. While ARIANNA assumes that landmarks, such as QR codes, and physical paths (composed of colored tapes, painted lines, or tactile paving) are deployed in the environment and recognized by the camera of a common smartphone, ARIANNA+ eliminates the need for any physical support thanks to the ARKit library, which we exploit to build a completely virtual path. Moreover, ARIANNA+ lets users interact more richly with the surrounding environment through convolutional neural networks (CNNs) trained to recognize objects or buildings, enabling access to content associated with them. By using a common smartphone as a mediation instrument with the environment, ARIANNA+ leverages augmented reality and machine learning to enhance physical accessibility. The proposed system allows visually impaired people to easily navigate indoor and outdoor scenarios simply by loading a previously recorded virtual path; automatic guidance along the route is provided through haptic, speech, and sound feedback.
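As a rough illustration of the path-following step described above, the sketch below checks how far a device position (as estimated by an AR framework) lies from a previously recorded virtual path, modeled as a polyline of waypoints, and turns that deviation into a coarse guidance cue. The function names, the 0.5 m tolerance, and the cue mapping are hypothetical choices for illustration, not ARIANNA+'s actual implementation.

```python
# Minimal sketch: lateral deviation from a recorded virtual path.
# Positions are 2D ground-plane coordinates in meters; all names and
# thresholds are assumptions made for this example.
import math

def point_segment_distance(p, a, b):
    """Distance from point p to the segment from a to b."""
    ax, ay = a; bx, by = b; px, py = p
    dx, dy = bx - ax, by - ay
    seg_len_sq = dx * dx + dy * dy
    if seg_len_sq == 0.0:
        return math.hypot(px - ax, py - ay)
    # Project p onto the segment, clamped to its endpoints.
    t = max(0.0, min(1.0, ((px - ax) * dx + (py - ay) * dy) / seg_len_sq))
    cx, cy = ax + t * dx, ay + t * dy
    return math.hypot(px - cx, py - cy)

def guidance_cue(position, path, tolerance_m=0.5):
    """Return a coarse feedback cue based on distance to the recorded path."""
    deviation = min(point_segment_distance(position, path[i], path[i + 1])
                    for i in range(len(path) - 1))
    return "on path" if deviation <= tolerance_m else "off path: correct course"

# Example: a straight recorded path and a device position 0.8 m to its side.
recorded_path = [(0.0, 0.0), (0.0, 5.0), (3.0, 5.0)]
print(guidance_cue((0.8, 2.0), recorded_path))   # -> "off path: correct course"
```

In a real system, the resulting cue would drive the haptic, speech, or sound feedback mentioned in the abstract rather than a printed string.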

