Deep Learning‐Enabled Label‐Free On‐Chip Detection and Selective Extraction of Cell Aggregate‐Laden Hydrogel Microcapsules

Small ◽  
2021 ◽  
Vol 17 (24) ◽  
pp. 2102868
Author(s):  
Alisa M. White ◽  
Yuntian Zhang ◽  
James G. Shamul ◽  
Jiangsheng Xu ◽  
Elyahb A. Kwizera ◽  
...  

ACS Sensors ◽  
2018 ◽  
Vol 3 (2) ◽  
pp. 410-417 ◽  
Author(s):  
Mingrui Sun ◽  
Patrick Durkin ◽  
Jianrong Li ◽  
Thomas L. Toth ◽  
Xiaoming He

Author(s):  
Yichen Wu ◽  
Ayfer Calis ◽  
Yi Luo ◽  
Cheng Chen ◽  
Maxwell Lutton ◽  
...  

Lab on a Chip ◽  
2021 ◽  
Author(s):  
Ningquan Wang ◽  
Ruxiu Liu ◽  
Norh Asmare ◽  
Chia-Heng Chu ◽  
Ozgun Civelekoglu ◽  
...  

An adaptive microfluidic system changing its operational state in real-time based on cell measurements through an on-chip electrical sensor network.


2021 ◽  
Vol 64 (6) ◽  
pp. 107-116
Author(s):  
Yakun Sophia Shao ◽  
Jason Clemons ◽  
Rangharajan Venkatesan ◽  
Brian Zimmer ◽  
Matthew Fojtik ◽  
...  

Package-level integration using multi-chip-modules (MCMs) is a promising approach for building large-scale systems. Compared to a large monolithic die, an MCM combines many smaller chiplets into a larger system, substantially reducing fabrication and design costs. Current MCMs typically contain only a handful of coarse-grained large chiplets due to the high area, performance, and energy overheads associated with inter-chiplet communication. This work investigates and quantifies the costs and benefits of using MCMs with fine-grained chiplets for deep learning inference, an application domain with large compute and on-chip storage requirements. To evaluate the approach, we architected, implemented, fabricated, and tested Simba, a 36-chiplet prototype MCM system for deep-learning inference. Each chiplet achieves 4 TOPS peak performance, and the 36-chiplet MCM package achieves up to 128 TOPS and up to 6.1 TOPS/W. The MCM is configurable to support a flexible mapping of DNN layers to the distributed compute and storage units. To mitigate inter-chiplet communication overheads, we introduce three tiling optimizations that improve data locality. These optimizations achieve up to 16% speedup compared to the baseline layer mapping. Our evaluation shows that Simba can process 1988 images/s running ResNet-50 with a batch size of one, delivering an inference latency of 0.50 ms.
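The abstract's point about tiling for data locality can be illustrated with a small sketch. This is a hypothetical example, not Simba's actual mapper: it compares the per-chiplet input traffic of two ways of tiling one convolution layer across a chiplet grid, showing why a locality-aware spatial tiling moves far less data between chiplets. All function names, the grid size, and the feature-map dimensions are illustrative assumptions.

```python
# Hypothetical sketch (not the Simba implementation): compare the input
# data each chiplet must receive under two tilings of a conv layer.

def channel_tiling_input(in_h, in_w):
    """Split output channels across chiplets: every chiplet computes a
    slice of the output channels, but each still needs the ENTIRE input
    feature map, so input traffic does not shrink with the grid."""
    return in_h * in_w

def spatial_tiling_input(grid_rows, grid_cols, in_h, in_w, halo):
    """Split the output spatially: each chiplet reads only its own input
    tile plus a halo border wide enough for the convolution window."""
    tile_h = in_h // grid_rows
    tile_w = in_w // grid_cols
    return (tile_h + 2 * halo) * (tile_w + 2 * halo)

# For a 56x56 input on a 6x6 grid with 3x3 kernels (halo of 1):
full_map = channel_tiling_input(56, 56)          # 3136 input elements
tile = spatial_tiling_input(6, 6, 56, 56, 1)     # 121 input elements
```

Under these assumed dimensions, the spatial tiling cuts per-chiplet input traffic from 3136 to 121 elements, which is the kind of locality gain the paper's tiling optimizations target.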


Electronics ◽  
2021 ◽  
Vol 10 (6) ◽  
pp. 689
Author(s):  
Tom Springer ◽  
Elia Eiroa-Lledo ◽  
Elizabeth Stevens ◽  
Erik Linstead

As machine learning becomes ubiquitous, the need to deploy models on real-time, embedded systems will become increasingly critical. This is especially true for deep learning solutions, whose large models pose interesting challenges for target architectures at the “edge” that are resource-constrained. The realization of machine learning, and deep learning, is being driven by the availability of specialized hardware, such as system-on-chip solutions, which provide some alleviation of constraints. Equally important, however, are the operating systems that run on this hardware, and specifically the ability to leverage commercial real-time operating systems which, unlike general purpose operating systems such as Linux, can provide the low-latency, deterministic execution required for embedded, and potentially safety-critical, applications at the edge. Despite this, studies considering the integration of real-time operating systems, specialized hardware, and machine learning/deep learning algorithms remain limited. In particular, better mechanisms for real-time scheduling in the context of machine learning applications will prove to be critical as these technologies move to the edge. In order to address some of these challenges, we present a resource management framework designed to provide a dynamic on-device approach to the allocation and scheduling of limited resources in a real-time processing environment. These types of mechanisms are necessary to support the deterministic behavior required by the control components contained in the edge nodes. To validate the effectiveness of our approach, we applied rigorous schedulability analysis to a large set of randomly generated simulated task sets and then verified the most time-critical applications, such as the control tasks, which maintained low-latency deterministic behavior even during off-nominal conditions. 
The practicality of our scheduling framework was demonstrated by integrating it into a commercial real-time operating system (VxWorks) and then running a typical deep learning image processing application to perform simple object detection. The results indicate that our proposed resource management framework can be leveraged to facilitate integration of machine learning algorithms with real-time operating systems and embedded platforms, including widely used, industry-standard real-time operating systems.
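The schedulability analysis the abstract mentions can be sketched with one classic test. This is a minimal illustration, not the paper's framework: it applies the Liu and Layland utilization bound, a well-known sufficient (but not necessary) condition for rate-monotonic scheduling of periodic tasks. The task-set encoding as (worst-case execution time, period) pairs and the function names are assumptions for the sketch.

```python
# Minimal sketch of one schedulability test: the Liu & Layland
# utilization bound for rate-monotonic (fixed-priority) scheduling.
# A task is a (wcet, period) pair in the same time unit.

def total_utilization(tasks):
    """Sum of wcet/period over all periodic tasks."""
    return sum(wcet / period for wcet, period in tasks)

def rm_utilization_test(tasks):
    """Sufficient condition: n tasks are rate-monotonic schedulable
    if total utilization U <= n * (2**(1/n) - 1). Failing the bound
    does NOT prove the set unschedulable (the test is not necessary)."""
    n = len(tasks)
    bound = n * (2 ** (1 / n) - 1)
    return total_utilization(tasks) <= bound

# Example: U = 1/4 + 1/5 + 2/10 = 0.65 <= ~0.780 for n = 3, so this
# set passes the bound.
passes = rm_utilization_test([(1, 4), (1, 5), (2, 10)])
```

A framework like the one described would run a test of this kind over many randomly generated task sets, then fall back to exact response-time analysis for sets that fail the quick utilization check.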


2021 ◽  
Author(s):  
Mahyar Salek ◽  
Hou-pu Chou ◽  
Prashast Khandelwal ◽  
Krishna P. Pant ◽  
Thomas J. Musci ◽  
...  

2021 ◽  
Author(s):  
Zoltán Göröcs ◽  
David Baum ◽  
Fang Song ◽  
Kevin de Haan ◽  
Hatice Ceylan Koydemir ◽  
...  
