An engineering perspective on the development and evolution of implantable cardiac monitors in free-living animals

2021 ◽  
Vol 376 (1830) ◽  
pp. 20200217 ◽  
Author(s):  
Timothy G. Laske ◽  
David L. Garshelis ◽  
Tinen L. Iles ◽  
Paul A. Iaizzo

The latest implantable physiological monitoring devices can record multiple channels of data (including heart rates and rhythms, activity, temperature, impedance and posture) and, coupled with powerful software applications, have provided novel insights into the physiology of animals in the wild. This perspective details past challenges and lessons learned from the use and development of implanted biologgers, designed for human clinical application, in our research on free-ranging American black bears (Ursus americanus). In addition, we reference other research by colleagues and collaborators who have leveraged these devices in their work on brown bears (Ursus arctos), grey wolves (Canis lupus), moose (Alces alces), maned wolves (Chrysocyon brachyurus) and southern elephant seals (Mirounga leonina). We also discuss potential applications of such devices across a range of other species. To date, the devices described have been used in fifteen different wild species, with publications pending in many instances. We have focused our physiological research on the analysis of heart rates and rhythms, so special attention will be paid to this topic. We then discuss some major expected step changes, such as improvements in sensing algorithms, data storage and the incorporation of next-generation short-range wireless telemetry. The latter provides new avenues for data transfer and, when combined with cloud-based computing, offers not only the means for big data storage but also the ability to readily leverage high-performance computing platforms using artificial intelligence and machine learning algorithms. These advances will dramatically increase both data quantity and quality and will facilitate the development of automated recognition of extreme physiological events or key behaviours of interest in a broad array of environments, thus further aiding wildlife monitoring and management.
This article is part of the theme issue ‘Measuring physiology in free-living animals (Part I)’.
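The automated recognition of extreme physiological events anticipated in this abstract can be illustrated with a minimal sketch: a threshold-plus-duration scan over a beat-to-beat heart-rate series. The thresholds, run length and function name are hypothetical illustrations, not values from the paper (hibernating black bears, for instance, would need species- and season-specific limits).

```python
def detect_extreme_events(heart_rates, low=15, high=250, min_run=3):
    """Flag runs of beats outside [low, high] bpm lasting >= min_run samples.

    All thresholds are illustrative placeholders; real limits would be
    species- and season-specific (e.g. very low rates during hibernation).
    """
    events = []
    start = None
    for i, hr in enumerate(heart_rates):
        out_of_range = hr < low or hr > high
        if out_of_range and start is None:
            start = i  # a candidate event begins here
        elif not out_of_range and start is not None:
            if i - start >= min_run:
                events.append((start, i))  # record [start, i) as an event
            start = None
    # close out an event that runs to the end of the recording
    if start is not None and len(heart_rates) - start >= min_run:
        events.append((start, len(heart_rates)))
    return events
```

In practice such a detector would run on-device so that only event summaries, not raw waveforms, need to be transmitted.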

2017 ◽  
Author(s):  
Kyle Chard ◽  
Eli Dart ◽  
Ian Foster ◽  
David Shifflett ◽  
Steven Tuecke ◽  
...  

We describe best practices for providing convenient, high-speed, secure access to large data via research data portals. We capture these best practices in a new design pattern, the Modern Research Data Portal, that disaggregates the traditional monolithic web-based data portal to achieve orders-of-magnitude increases in data transfer performance, support new deployment architectures that decouple control logic from data storage, and reduce development and operations costs. We introduce the design pattern; explain how it leverages high-performance Science DMZs and cloud-based data management services; review representative examples at research laboratories and universities, including both experimental facilities and supercomputer sites; describe how to leverage Python APIs for authentication, authorization, data transfer, and data sharing; and use coding examples to demonstrate how these APIs can be used to implement a range of research data portal capabilities. Sample code at a companion web site, https://docs.globus.org/mrdp, provides application skeletons that readers can adapt to realize their own research data portals.
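The pattern's central move, decoupling control logic from the data path, can be sketched without any portal framework: the control tier authorizes a request and issues a short-lived signed URL, and the storage tier later verifies it independently while serving the bytes directly. This is a minimal illustration of the idea only, not the Globus APIs the article describes; the key, host name and helper names are hypothetical.

```python
import hashlib
import hmac
import time

SECRET = b"portal-signing-key"  # hypothetical; a real portal would use a managed secret


def sign_download_url(path, expires_in=300, now=None):
    """Control tier: after authorizing the user, hand back a short-lived
    signed URL. The bytes are later served directly by the storage tier,
    not proxied through the portal web server."""
    expiry = int((now if now is not None else time.time()) + expires_in)
    msg = f"{path}:{expiry}".encode()
    sig = hmac.new(SECRET, msg, hashlib.sha256).hexdigest()
    return f"https://data.example.org{path}?expires={expiry}&sig={sig}"


def verify_download_url(path, expiry, sig, now=None):
    """Storage tier: check the signature and expiry without any callback
    to the portal, keeping the data path independent of the control path."""
    current = now if now is not None else time.time()
    msg = f"{path}:{expiry}".encode()
    expected = hmac.new(SECRET, msg, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, sig) and current < expiry
```

Because verification needs only the shared secret, the storage tier can be a Science DMZ transfer node or object store that never runs the portal's application code.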


2021 ◽  
Author(s):  
Zhehao Xu ◽  
Xiao Su ◽  
Sicong Hua ◽  
Jiwei Zhai ◽  
Sannian Song ◽  
...  

Abstract: For high-performance data centers, huge data transfers, reliable data storage and emerging in-memory computing demand a memory technology that combines fast access, large capacity and persistence. Within phase-change memory, the Sb-rich compounds Sb7Te3 and GeSb6Te have demonstrated fast switching speed and a considerable difference in phase-transition temperature. A multilayer structure built from the two compounds yields three non-volatile resistance states. Sequential, temperature-dependent phase transitions are confirmed to produce the distinct resistance states with sufficient thermal stability. After verifying nanoscale confinement for the integration of the Sb7Te3/GeSb6Te multilayer thin film, T-shaped PCM cells are fabricated and two SET operations are executed with 40 ns-wide pulses, exhibiting good potential as a multi-level PCM candidate.
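The sequential, temperature-dependent transitions can be caricatured in a few lines: a toy model in which each compound crystallizes once its (hypothetical) transition temperature is exceeded, yielding three resistance levels. The temperatures, labels and function name are illustrative placeholders, not measured values from this work.

```python
# Toy model of a two-compound multilayer PCM cell with three resistance states.
# Transition temperatures below are hypothetical placeholders, not measurements:
SB7TE3_TCRYST = 150   # deg C, assumed lower-Tx layer, crystallizes first
GESB6TE_TCRYST = 230  # deg C, assumed higher-Tx layer, crystallizes later


def resistance_state(peak_temperature):
    """Map the peak programming temperature to one of three resistance
    levels, assuming the layers crystallize sequentially with temperature."""
    if peak_temperature < SB7TE3_TCRYST:
        return "high"          # both layers amorphous
    if peak_temperature < GESB6TE_TCRYST:
        return "intermediate"  # only the Sb7Te3 layer crystalline
    return "low"               # both layers crystalline
```

The two SET operations in the abstract correspond, in this caricature, to the two temperature thresholds being crossed by pulses of different amplitude.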


Author(s):  
Jason Williams

Abstract: Complex research questions pose equally complex reproducibility challenges. Datasets may need to be managed over long periods of time. Reliable and secure repositories are needed for data storage. Sharing big data requires advance planning and becomes complicated when collaborators are spread across institutions and countries. Many complex analyses require the larger compute resources only provided by cloud and high-performance computing infrastructure. Finally, at publication, funder and publisher requirements must be met for data availability, accessibility and computational reproducibility. For all of these reasons, cloud-based cyberinfrastructures are an important component for satisfying the needs of data-intensive research. Learning how to incorporate these technologies into your research skill set will allow you to tackle data analysis challenges that are often beyond the resources of individual research institutions. One of the advantages of CyVerse is that it offers many solutions for high-powered analyses that do not require knowledge of command-line (i.e., Linux) computing. In this chapter we highlight CyVerse capabilities by analyzing RNA-Seq data. The lessons learned will translate to doing RNA-Seq in other computing environments and will focus on how CyVerse infrastructure supports reproducibility goals (e.g., metadata management, containers), team science (e.g., data sharing features) and flexible computing environments (e.g., interactive computing, scaling).


MRS Bulletin ◽  
2006 ◽  
Vol 31 (4) ◽  
pp. 324-328 ◽  
Author(s):  
Lisa Dhar

Abstract: Holographic storage is considered a promising successor to currently available optical storage technologies. Enabling significant gains in both data transfer rates and storage densities, holographic storage and its capabilities have gained a great deal of recent attention. One of the primary challenges in the advancement of holographic storage has been the development of suitable recording materials. In this article, we provide a brief introduction to holographic storage and its potential advantages over current technologies, outline the requirements for recording materials, and survey candidate materials. We end by highlighting recent progress in photopolymer materials that has produced materials that satisfy the requirements for holographic storage and have enabled significant demonstrations of the viability of this technology.


2018 ◽  
Vol 4 ◽  
pp. e144 ◽  
Author(s):  
Kyle Chard ◽  
Eli Dart ◽  
Ian Foster ◽  
David Shifflett ◽  
Steven Tuecke ◽  
...  

We describe best practices for providing convenient, high-speed, secure access to large data via research data portals. We capture these best practices in a new design pattern, the Modern Research Data Portal, that disaggregates the traditional monolithic web-based data portal to achieve orders-of-magnitude increases in data transfer performance, support new deployment architectures that decouple control logic from data storage, and reduce development and operations costs. We introduce the design pattern; explain how it leverages high-performance data enclaves and cloud-based data management services; review representative examples at research laboratories and universities, including both experimental facilities and supercomputer sites; describe how to leverage Python APIs for authentication, authorization, data transfer, and data sharing; and use coding examples to demonstrate how these APIs can be used to implement a range of research data portal capabilities. Sample code at a companion web site, https://docs.globus.org/mrdp, provides application skeletons that readers can adapt to realize their own research data portals.


2018 ◽  
Vol 14 (2) ◽  
pp. 127-138
Author(s):  
Asif Banka ◽  
Roohie Mir

Advancements in modern computing and architectures focus on harnessing parallelism to achieve high-performance computing, generating massive amounts of data in the process. The information produced needs to be represented and analyzed to address various challenges in technology and business domains. The radical expansion and integration of digital devices, networking, data storage and computation systems are generating more data than ever. Because these data sets are massive and complex, traditional learning methods fall short, which has in turn driven the adoption of machine learning techniques to mine the information hidden in unseen data. Interestingly, deep learning finds its place in big data applications; one of its major advantages is that its features are not human-engineered. In this paper, we look at various machine learning algorithms that have already been applied to big data problems and have shown promising results. We also look at deep learning as a solution to big data issues that are not efficiently addressed using traditional methods. Deep learning is finding its place in most applications characterized by the critical and dominating five Vs of big data and is expected to perform better.


Biomimetics ◽  
2021 ◽  
Vol 6 (2) ◽  
pp. 32
Author(s):  
Tomasz Blachowicz ◽  
Jacek Grzybowski ◽  
Pawel Steblinski ◽  
Andrea Ehrmann

Computers today use separate components for data storage and data processing, making data transfer between these units a bottleneck for computing speed. So-called cognitive (or neuromorphic) computing approaches therefore try to combine both tasks, as is done in the human brain, to make computing faster and less energy-consuming. One possible route to new hardware for neuromorphic computing is offered by nanofiber networks, as they can be prepared by diverse methods, from lithography to electrospinning. Here, we show results of micromagnetic simulations of three coupled semicircle fibers in which domain walls are excited by rotating magnetic fields (inputs), leading to different output signals that can be used for stochastic data processing, mimicking biological synaptic activity and thus being suitable as artificial synapses in artificial neural networks.
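A loose software analogy for such a stochastic artificial synapse, assuming only that the output probability depends on the rotating-field alignment and a synaptic weight, might look as follows. This is a toy sketch, not the micromagnetic model simulated in the paper; the names and the probability formula are hypothetical.

```python
import math
import random


def synapse_output(field_angle_deg, weight=0.5, rng=None):
    """Toy stochastic synapse: the probability of emitting a '1' (a domain
    wall passing the junction) grows with the alignment of the rotating
    input field and with the synaptic weight. Purely illustrative."""
    rng = rng or random.Random()
    # Hypothetical firing probability: maximal at 0 degrees, zero at 180.
    p = weight * (1 + math.cos(math.radians(field_angle_deg))) / 2
    return 1 if rng.random() < p else 0
```

In an artificial neural network, averaging many such binary outputs over repeated field rotations would recover a graded, synapse-like transfer function.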


Sensors ◽  
2021 ◽  
Vol 21 (2) ◽  
pp. 656
Author(s):  
Xavier Larriva-Novo ◽  
Víctor A. Villagrá ◽  
Mario Vega-Barbas ◽  
Diego Rivera ◽  
Mario Sanz Rodrigo

Security in IoT networks is now mandatory because of the large amounts of data these systems handle. They are vulnerable to a growing number of increasingly sophisticated cybersecurity attacks. For this reason, new intrusion detection techniques, as accurate as possible for these scenarios, have to be developed. Intrusion detection systems based on machine learning algorithms have already shown high accuracy. This research proposes the study and evaluation of several preprocessing techniques based on traffic categorization for a machine learning neural network algorithm. For its evaluation, this research uses two benchmark datasets, UGR16 and UNSW-NB15, as well as one of the most widely used datasets, KDD99. The preprocessing techniques were evaluated using scaling and normalization functions. All of these preprocessing models were applied to different sets of characteristics based on a categorization composed of four groups of features: basic connection features, content characteristics, statistical characteristics and, finally, a group composed of traffic-based features and connection direction-based traffic characteristics. The objective of this research is to evaluate this categorization by using various data preprocessing techniques to obtain the most accurate model. Our proposal shows that, by applying the categorization of network traffic together with several preprocessing techniques, accuracy can be enhanced by up to 45%. Preprocessing a specific group of characteristics yields greater accuracy, allowing the machine learning algorithm to correctly classify parameters related to possible attacks.
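The per-group preprocessing described above can be sketched as min-max scaling applied independently within each feature group. The group and feature names here are hypothetical placeholders for the four categories (basic connection, content, statistical and traffic-based features); real experiments would use the datasets' actual fields and the paper's chosen scalers.

```python
def min_max_scale_groups(rows, groups):
    """Apply min-max scaling independently within each named feature group.

    rows:   list of dicts mapping feature name -> numeric value.
    groups: dict mapping a group name (e.g. 'basic', 'content',
            'statistical', 'traffic') to the feature names it contains.
    Returns new rows; the input is left unmodified.
    """
    scaled = [dict(r) for r in rows]
    for features in groups.values():
        for f in features:
            values = [r[f] for r in rows]
            lo, hi = min(values), max(values)
            span = hi - lo or 1.0  # avoid division by zero for constant features
            for r in scaled:
                r[f] = (r[f] - lo) / span
    return scaled
```

Keeping the scaling per-group makes it easy to ablate one category at a time, which is how a categorization like the one above can be evaluated for its effect on accuracy.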

