In-Network Computing Powered Mobile Edge: Toward High Performance Industrial IoT

IEEE Network ◽  
2020 ◽  
pp. 1-7
Author(s):  
Tianle Mai ◽  
Haipeng Yao ◽  
Song Guo ◽  
Yunjie Liu

2013 ◽  
Vol 837 ◽  
pp. 651-656
Author(s):  
Gabriel Raicu ◽  
Alexandra Raicu

The authors present the development of a scientific cloud computing environment (SCCE) for engineering and business simulations that offers high-performance computation capability. The software platform consists of a scalable pool of virtual machines running a UNIX-like (Linux) or UNIX-derivative (FreeBSD) operating system, with specialised software for modelling engineering processes, focused on business training and predictive analytics through simulation. Advanced engineering simulation technology allows engineers to understand and predict the future performance of complex structures and system designs, which can then be optimized to reduce risk, improve performance or enhance survivability. A key benefit of cloud computing for universities and other research centres is that they can share computing resources beyond their own technical capabilities: KVM (Kernel-based Virtual Machine) gives all of them access to large-scale processing power. Our solution provides a more productive approach: a full-scale virtualised computer with scalable storage space and instantly upgradable processing capability. It is more flexible than other network computing systems and saves precious research time and money. Unlike existing systems, the scientific community can receive support from a large number of specialists who contribute in a collaborative way.
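
As an illustration of the kind of KVM-based VM pool the abstract describes, the following is a minimal sketch using the libvirt Python bindings; the domain XML, VM names and pool size are illustrative assumptions, not details of the authors' actual SCCE platform.

```python
# Minimal sketch (assumption, not the authors' SCCE code) of provisioning a small
# pool of KVM guests through the libvirt Python bindings.
import libvirt

# Illustrative, simplified domain XML; a real guest would also declare disks,
# network interfaces and a boot device.
DOMAIN_XML = """
<domain type='kvm'>
  <name>{name}</name>
  <memory unit='GiB'>4</memory>
  <vcpu>2</vcpu>
  <os><type arch='x86_64'>hvm</type></os>
</domain>
"""

def provision_pool(pool_size=4):
    """Define and start `pool_size` simulation worker VMs on the local KVM host."""
    conn = libvirt.open('qemu:///system')    # connect to the local hypervisor
    try:
        for i in range(pool_size):
            xml = DOMAIN_XML.format(name=f'scce-worker-{i}')
            dom = conn.defineXML(xml)        # register the guest definition
            dom.create()                     # boot the guest
            print(f'started {dom.name()}')
    finally:
        conn.close()

if __name__ == '__main__':
    provision_pool()
```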


2013 ◽  
Vol 2013 (1) ◽  
pp. 000223-000227 ◽  
Author(s):  
Zhuowen Sun ◽  
Kevin Chen ◽  
Richard Crisp

The recent explosion of thin notebooks and tablets has challenged the IC packaging industry to come up with new solutions for DRAM integration onto the motherboard. Beyond traditional SO-DIMMs, innovative memory solutions should perform well at high speed (1600 MT/s) with a much reduced footprint and z-height, while leveraging current manufacturing infrastructure for lower cost and also enabling simpler and cheaper motherboard design. To accomplish all of these goals for high-performance on-board memory applications, we developed a new DIMM-in-a-Package (DIAP) technology. This 22.5×17.5×1.2 mm quad-die face-down (QFD) part has four standard center-bond DDR3L dies (each ×16) placed face-down and wire-bonded to the bottom layer of the 407-ball BGA package. This judiciously designed package places data nets at the periphery and command/control/address nets in the middle of the BGA. As a result, motherboard design and layout are substantially simplified, allowing the use of a low-cost non-HDI Type 3 board with signal integrity performance comparable to expensive HDI boards. The QFD™ ball assignment can accommodate future memory density expansion and different memory types (e.g., LPDDR3, DDR4). It also enables dual-rank operation in each channel when double-sided assembly is used. We successfully demonstrated in a production build that a 1 GB ×64 DDR3L QFD with a data rate of 1600 MT/s can be achieved on a Type 3 motherboard for the Intel Haswell mobile platform in dual-channel, dual-rank operation. A balanced-T command/address topology between the processor and the memory was implemented in a DELL XPS 12 Ultrabook. Channel simulations including chip, package and board were performed. We also conducted crosstalk analysis with up to 9 aggressors to account for the timing impact of the dense routing inside the QFD. Layout optimization techniques for best signal integrity, such as trace length matching and stub length minimization, are discussed in detail and applied to both package and motherboard design. Lastly, we also present and discuss DIAPs currently under study with different memory bus topologies for even higher data rates, up to 2400 MT/s, using the same QFD technology. Our results and analysis demonstrate that DIAP using wirebond-based QFD technology is a viable candidate for a compact, low-cost, high-performance on-board memory solution. We have identified several key aspects of DIAP architecture design and physical layout that strongly impact the signal integrity of QFD parts at rates above 1600 MT/s and that could be optimized for DDR4 operation. QFD DIAP can become an attractive low-cost, high-performance option for many OEMs and ODMs across mobile, personal and network computing platforms.
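
For context on the quoted configuration, the following back-of-the-envelope calculation (illustrative arithmetic, not a figure reported by the authors) gives the peak theoretical bandwidth of a 1600 MT/s, ×64, dual-channel memory interface.

```python
# Peak theoretical bandwidth for the configuration quoted in the abstract
# (illustrative arithmetic, not a result from the paper).
transfers_per_s = 1600e6   # 1600 MT/s DDR3L data rate
bus_width_bits = 64        # x64 channel
channels = 2               # dual-channel operation

bytes_per_transfer = bus_width_bits / 8
peak_gb_s = transfers_per_s * bytes_per_transfer * channels / 1e9
print(f"peak theoretical bandwidth: {peak_gb_s:.1f} GB/s")  # 25.6 GB/s
```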


1997 ◽  
Vol 12 (5) ◽  
pp. 451-459
Author(s):  
V Strumpen ◽  
B Ramkumar ◽  
T.L Casavant ◽  
S.M Reddy

2019 ◽  
Vol 15 (6) ◽  
pp. 3632-3641 ◽  
Author(s):  
Yang Xu ◽  
Ju Ren ◽  
Guojun Wang ◽  
Cheng Zhang ◽  
Jidian Yang ◽  
...  

Sensors ◽  
2021 ◽  
Vol 21 (15) ◽  
pp. 5199
Author(s):  
Asier Atutxa ◽  
David Franco ◽  
Jorge Sasiain ◽  
Jasone Astorga ◽  
Eduardo Jacob

Industrial networks are introducing Internet of Things (IoT) technologies into their manufacturing processes in order to enhance existing methods and obtain smarter, greener and more effective processes. Global predictions forecast massive widespread adoption of IoT technology in industrial sectors in the near future. However, these innovations face several challenges, such as achieving short response times for time-critical applications. Concepts like in-network computing and edge computing can provide adequate communication quality for these industrial environments, and data plane programming has proven to be a useful mechanism for their implementation. Specifically, the P4 language is used to define the behavior of programmable switches and network elements. This paper presents a solution for industrial IoT (IIoT) network communications that reduces response times by applying in-network computing through data plane programming and P4. Our solution processes Message Queuing Telemetry Transport (MQTT) packets sent by a sensor directly in the data plane and generates an alarm when the measured value exceeds a threshold. The implementation has been tested in an experimental facility, using a Netronome SmartNIC as the P4-programmable network device. Response times are reduced by 74%, and the delay introduced by the P4 processing in the network is insignificant.
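
The paper's implementation is a P4 program running on a Netronome SmartNIC; the sketch below is a simplified host-side Python rendition of the per-packet logic described (parse an MQTT PUBLISH, compare the reported value against a threshold, raise an alarm). The QoS-0 assumption, single-byte Remaining Length, ASCII payload format and threshold value are illustrative assumptions, not taken from the paper.

```python
# Host-side Python sketch of the per-packet check that, in the paper, runs in the
# data plane as a P4 program. Field handling is simplified (QoS-0 PUBLISH,
# single-byte Remaining Length) and the threshold is an assumed value.
import struct

THRESHOLD = 80.0  # assumed alarm threshold for the sensor reading

def check_mqtt_publish(packet: bytes):
    """Parse a QoS-0 MQTT PUBLISH packet and return True if its value triggers an alarm."""
    if packet[0] >> 4 != 3:               # MQTT control packet type 3 = PUBLISH
        return None
    # Byte 1 is the Remaining Length (assumed to fit in one byte, i.e. < 128).
    topic_len = struct.unpack_from(">H", packet, 2)[0]
    payload = packet[4 + topic_len:]      # QoS 0: no packet identifier before payload
    value = float(payload.decode())       # sensor assumed to publish the reading as ASCII
    return value > THRESHOLD

# Example: topic "plant/temp", payload "93.5" -> alarm
pkt = bytes([0x30, 0x10]) + struct.pack(">H", 10) + b"plant/temp" + b"93.5"
print(check_mqtt_publish(pkt))  # True
```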


2000 ◽  
Author(s):  
Christopher J. Freitas ◽  
Derrick B. Coffin ◽  
Richard L. Murphy

Distributed parallel computing using message-passing techniques on Networks of Workstations (NOW) has achieved widespread use in the context of Local Area Networks (LAN). Recently, the concept of Grid-based computing using Wide Area Networks (WAN) has been proposed as a general solution to distributed high performance computing. The use of computers and resources at different geographic locations connected by a Wide Area Network and executing a real application introduces additional variables that potentially complicate the efficient use of these resources. Presented here are the results of a study that begins to characterize the performance issues of a WAN-based NOW, connecting resources that span an international border.
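
As a rough illustration of the kind of message-passing measurement such a characterization involves, the following mpi4py ping-pong sketch (an assumption for illustration, not the authors' code, which predates mpi4py) times round trips between two nodes of a workstation network; the message size and repetition count are arbitrary.

```python
# Minimal mpi4py ping-pong sketch (illustrative, not the authors' code) for measuring
# round-trip message latency between two nodes of a workstation network.
# Run with e.g.: mpirun -np 2 --host nodeA,nodeB python pingpong.py
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()
payload = bytearray(1024)          # 1 KiB message (assumed size)
reps = 100

comm.Barrier()
start = MPI.Wtime()
for _ in range(reps):
    if rank == 0:
        comm.Send([payload, MPI.BYTE], dest=1, tag=0)
        comm.Recv([payload, MPI.BYTE], source=1, tag=0)
    elif rank == 1:
        comm.Recv([payload, MPI.BYTE], source=0, tag=0)
        comm.Send([payload, MPI.BYTE], dest=0, tag=0)
elapsed = MPI.Wtime() - start

if rank == 0:
    print(f"mean round-trip time: {elapsed / reps * 1e6:.1f} us")
```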

