Redundant Photo-Voltaic Power Cell in a Highly Reliable System

Electronics ◽  
2021 ◽  
Vol 10 (11) ◽  
pp. 1253
Author(s):  
Bertalan Beszédes ◽  
Károly Széll ◽  
György Györök

The conversion of solar energy into electricity makes it possible to generate a power resource at the relevant location, independent of the availability of the electrical network. The technology greatly facilitates the supply of electricity to objects that, due to their location, cannot be connected to the electrical network. Typical areas of use are nature reserves, game management areas, large-scale agricultural areas, large-scale livestock areas, industrial pipeline routes, water resources far from infrastructure, etc. Protecting such areas and assets and monitoring their functionality are of particular importance, and sectors classified as critical infrastructure are of paramount importance. This article presents the conceptual structure of a possible design for a high-reliability, redundant, modular, self-monitoring, microcontroller-controlled system that can be used in the outlined areas.
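
To make the self-monitoring idea concrete, here is a minimal sketch of a supervision loop that moves the load to a healthy redundant module when the active one degrades. The voltage threshold, module count, and the measurement and switching functions are illustrative stubs, not the authors' actual microcontroller design.

```python
# Hedged sketch: periodically check each photovoltaic module and switch
# the load to a healthy redundant module when the active one degrades.
# All thresholds and hardware-facing functions are assumptions.
import random
import time

V_MIN = 11.0          # assumed minimum acceptable module voltage (V)
MODULES = [0, 1, 2]   # primary module plus two redundant modules

def read_module_voltage(m):
    # Stub standing in for an ADC reading on a microcontroller.
    return 12.0 + random.uniform(-2.0, 0.5)

def switch_load_to(m):
    # Stub standing in for driving a relay / MOSFET switch matrix.
    print(f"load now on module {m}")

active = MODULES[0]
switch_load_to(active)
for _ in range(10):                 # supervision loop (would run forever)
    if read_module_voltage(active) < V_MIN:
        healthy = [m for m in MODULES
                   if m != active and read_module_voltage(m) >= V_MIN]
        if healthy:
            active = healthy[0]
            switch_load_to(active)
    time.sleep(0.1)
```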

Author(s):  
David Mendonça ◽  
William A. Wallace ◽  
Barbara Cutler ◽  
James Brooks

Abstract Large-scale disasters can produce profound disruptions in the fabric of interdependent critical infrastructure systems such as water, telecommunications and electric power. The work of post-disaster infrastructure restoration typically requires information sharing and close collaboration across these sectors; yet – due to a number of factors – the means to investigate decision-making phenomena associated with these activities are limited. This paper motivates and describes the design and implementation of a computer-based synthetic environment for investigating collaborative information seeking in the performance of a (simulated) infrastructure restoration task. The main contributions of this work are twofold. First, it develops a set of theoretically grounded measures of collaborative information seeking processes and embeds them within a computer-based system. Second, it suggests how these data may be organized and modeled to yield insights into information seeking processes in the performance of a complex, collaborative task. The paper concludes with a discussion of implications of this work for practice and for future research.


2021 ◽  
Author(s):  
Xinxu Shen ◽  
Troy Houser ◽  
David Victor Smith ◽  
Vishnu P. Murty

The use of naturalistic stimuli, such as narrative movies, is gaining popularity in many fields that characterize memory, affect, and decision-making. Narrative recall paradigms are often used to capture the complexity and richness of memory for naturalistic events. However, scoring narrative recalls is time-consuming and prone to human biases. Here, we show the validity and reliability of using a natural language processing tool, the Universal Sentence Encoder (USE), to automatically score narrative recall. We compared the reliability of scoring between two independent raters (i.e., hand-scored) and between our automated algorithm and individual raters (i.e., automated) on trial-unique video clips of magic tricks. Study 1 showed that our automated segmentation approaches yielded high reliability, reflected the measures yielded by hand-scoring, and outperformed another popular natural language processing tool, GloVe. In Study 2, we tested whether our automated approach remained valid for individuals varying on clinically relevant dimensions that influence episodic memory: age and anxiety. We found that our automated approach was equally reliable across age groups and anxiety groups, which shows its efficacy for assessing narrative recall in large-scale individual-difference analyses. In sum, these findings suggest that machine learning approaches implementing USE are a promising tool for scoring large-scale narrative recalls and performing individual-difference analyses in research using naturalistic stimuli.
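
To illustrate the general approach, the sketch below embeds recall segments and reference details with USE and credits a detail as recalled when some segment is similar enough. The example sentences, segmentation, and the 0.5 cutoff are illustrative assumptions, not the paper's exact pipeline.

```python
# Minimal sketch: score a narrative recall against reference details by
# embedding both with the Universal Sentence Encoder (USE) and taking
# cosine similarity.
import numpy as np
import tensorflow_hub as hub

use = hub.load("https://tfhub.dev/google/universal-sentence-encoder/4")

def cosine_matrix(a, b):
    a = a / np.linalg.norm(a, axis=1, keepdims=True)
    b = b / np.linalg.norm(b, axis=1, keepdims=True)
    return a @ b.T

reference = ["The magician shows an empty hand.",
             "A coin appears behind the spectator's ear."]
recall = ["He showed us his hand was empty",
          "then he pulled a coin from someone's ear"]

sim = cosine_matrix(np.asarray(use(recall)), np.asarray(use(reference)))
# Credit each reference detail if some recall segment is similar enough.
recalled = sim.max(axis=0) >= 0.5   # 0.5 is an assumed cutoff
print(f"recall score: {recalled.mean():.2f}")
```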


2021 ◽  
Author(s):  
Shuo Zhang ◽  
Shuo Shi ◽  
Tianming Feng ◽  
Xuemai Gu

Abstract Unmanned aerial vehicles (UAVs) have been widely used in communication systems due to their excellent maneuverability and mobility. The ultra-high speed, ultra-low latency, and ultra-high reliability of 5th generation wireless systems (5G) have further promoted the vigorous development of UAVs. Compared with traditional means of communication, a UAV can provide services to ground terminals without time and space constraints, so it is often used as an aerial base station (BS); in particular, in emergency communications and rescue it provides temporary communication coverage for disaster areas. For large-scale, scattered user coverage tasks, the UAV's trajectory is an important factor affecting its energy consumption and communication performance. In this paper, we consider a UAV emergency communication network in which a UAV aims to achieve complete coverage of potential underlying D2D users (DUs). The trajectory planning problem is transformed into the deployment and connection problem of stop points (SPs). Targeting trajectory length and sum throughput, two trajectory planning algorithms based on K-means are proposed. Due to the non-convexity of the sum-throughput optimization, we present a sub-optimal solution using the successive convex approximation (SCA) method. To balance trajectory length against sum throughput, we propose a joint evaluation index, which is used as an objective function to further optimize the trajectory. Simulation results show the validity of the proposed algorithms, which outperform the well-known benchmark scheme in terms of trajectory length and sum throughput.
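
A minimal sketch of the deployment-and-connection idea follows: cluster user locations with K-means to place stop points, then connect them with a greedy nearest-neighbour tour as a simple trajectory. The user layout and number of stop points are assumptions, and the SCA-based throughput optimization from the paper is not reproduced here.

```python
# Sketch: K-means places stop points (SPs) over scattered users, then a
# greedy nearest-neighbour tour connects the SPs into a trajectory.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
users = rng.uniform(0, 1000, size=(60, 2))   # scattered ground users (m)

k = 8                                        # assumed number of stop points
sps = KMeans(n_clusters=k, n_init=10, random_state=0).fit(users).cluster_centers_

# Greedy nearest-neighbour connection of stop points.
order, remaining = [0], set(range(1, k))
while remaining:
    last = sps[order[-1]]
    nxt = min(remaining, key=lambda i: np.linalg.norm(sps[i] - last))
    order.append(nxt)
    remaining.remove(nxt)

length = sum(np.linalg.norm(sps[order[i + 1]] - sps[order[i]])
             for i in range(k - 1))
print(f"trajectory length: {length:.0f} m over {k} stop points")
```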


Author(s):  
V. Annapoorani ◽  
S. Sureshkumar ◽  
Srisaravanapathimurugesan ◽  
M. Manoj ◽  
K. Prabhu

Future-generation networks rely on the convergence of multimedia, broadband, and broadcast services, and Cognitive Radio (CR) networks are positioned as a preferred paradigm for meeting spectrum capacity demands. CR addresses these problems through dynamic spectrum access. However, the principal challenge faced by a CR lies in achieving spectrum efficiency. Consequently, spectrum efficiency improvement models based on spectrum sensing and sharing have attracted considerable research interest in recent years, including CR learning models, network densification architectures, massive Multiple Input Multiple Output (MIMO), and beamforming techniques. This paper surveys current CR spectrum efficiency models and techniques that support ultra-high-reliability, low-latency communications resilient to surges in traffic and competition for spectrum. These models and techniques enable a wide range of capabilities, from enhanced mobile broadband to large-scale Internet of Things (IoT) communications. The paper also relates the typical parameters of a spectrum block to realistic data rates; the models used here are applicable in the ultra-high-frequency band. The study concludes with a broad comparison of CRs and directions for future investigation of newly identified 5G research areas, in both industry and academia.
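
Spectrum sensing is the basic primitive behind the dynamic spectrum access surveyed above. Below is a minimal energy-detector sketch that decides whether a band is occupied by comparing received energy to a threshold; the signal model, noise variance, and false-alarm target are all illustrative assumptions.

```python
# Minimal energy-detection spectrum sensing sketch: compare the mean
# received energy in a sensing window against a threshold.
import numpy as np

rng = np.random.default_rng(1)
n = 1024                # samples per sensing window (assumed)
noise_var = 1.0         # assumed noise power

def sense(samples, threshold):
    energy = np.mean(np.abs(samples) ** 2)
    return energy > threshold   # True -> band busy, defer transmission

# Threshold for a ~10% false-alarm rate under pure noise, using the
# Gaussian approximation of the test statistic for large n.
threshold = noise_var * (1 + 1.2817 / np.sqrt(n))

noise = (rng.normal(0, np.sqrt(noise_var / 2), n)
         + 1j * rng.normal(0, np.sqrt(noise_var / 2), n))
signal = noise + 0.5 * np.exp(2j * np.pi * 0.1 * np.arange(n))  # PU present

print("idle band flagged busy?    ", sense(noise, threshold))
print("occupied band flagged busy?", sense(signal, threshold))
```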


2020 ◽  
Author(s):  
Xinhao Li ◽  
Denis Fourches

Deep neural networks can directly learn from chemical structures, without extensive user-driven selection of descriptors, to predict molecular properties/activities with high reliability. However, these approaches typically require large training sets to learn the endpoint-specific structural features and ensure reasonable prediction accuracy. Even though large datasets are becoming the new normal in drug discovery, especially for high-throughput screening or metabolomics datasets, one should also consider smaller datasets with challenging endpoints to model and forecast. Thus, it would be highly relevant to better utilize the tremendous compendium of unlabeled compounds from publicly available datasets to improve model performance on a user's particular series of compounds. In this study, we propose the Molecular Prediction Model Fine-Tuning (MolPMoFiT) approach, an effective transfer learning method based on self-supervised pre-training + task-specific fine-tuning for QSPR/QSAR modeling. A large-scale molecular structure prediction model is pre-trained using one million unlabeled molecules from ChEMBL in a self-supervised learning manner, and can then be fine-tuned on various QSPR/QSAR tasks for smaller chemical datasets with specific endpoints. Herein, the method is evaluated on four benchmark datasets (lipophilicity, FreeSolv, HIV, and blood-brain barrier penetration). The results show that the method achieves strong performance on all four datasets compared to other state-of-the-art machine learning modeling techniques reported in the literature so far.
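
The following is a conceptual sketch of the two-stage pattern behind this kind of transfer learning: first train a character-level language model on unlabeled SMILES strings, then reuse its encoder with a small regression head on a labeled endpoint. The toy corpus, architecture sizes, and labels are assumptions, not the paper's actual setup.

```python
# Sketch of self-supervised pre-training + task-specific fine-tuning
# on SMILES strings (toy scale).
import torch
import torch.nn as nn

smiles = ["CCO", "c1ccccc1", "CC(=O)O", "CCN"]   # unlabeled corpus (toy)
vocab = sorted(set("".join(smiles))) + ["^", "$"]
stoi = {c: i for i, c in enumerate(vocab)}

def encode(s):
    return torch.tensor([stoi["^"]] + [stoi[c] for c in s] + [stoi["$"]])

class Encoder(nn.Module):
    def __init__(self, v, d=32):
        super().__init__()
        self.emb = nn.Embedding(v, d)
        self.rnn = nn.LSTM(d, d, batch_first=True)
    def forward(self, x):
        out, _ = self.rnn(self.emb(x))
        return out

enc = Encoder(len(vocab))
lm_head = nn.Linear(32, len(vocab))
opt = torch.optim.Adam(list(enc.parameters()) + list(lm_head.parameters()), lr=1e-2)

# Stage 1: self-supervised next-character prediction on unlabeled SMILES.
for _ in range(50):
    for s in smiles:
        x = encode(s)
        logits = lm_head(enc(x[None, :-1]))
        loss = nn.functional.cross_entropy(logits[0], x[1:])
        opt.zero_grad(); loss.backward(); opt.step()

# Stage 2: fine-tune the same encoder on a small labeled endpoint (toy values).
labels = torch.tensor([[-0.3], [2.1], [-0.2], [-0.1]])
reg_head = nn.Linear(32, 1)
ft_opt = torch.optim.Adam(list(enc.parameters()) + list(reg_head.parameters()), lr=1e-3)
for _ in range(100):
    for s, y in zip(smiles, labels):
        h = enc(encode(s)[None])[:, -1]   # final hidden state as summary
        loss = nn.functional.mse_loss(reg_head(h), y[None])
        ft_opt.zero_grad(); loss.backward(); ft_opt.step()
```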


2022 ◽  
Vol 16 (1) ◽  
pp. 0-0

The Domain Name System (DNS) is regarded as one of the critical infrastructure components of the global Internet, because a large-scale DNS outage would effectively take a typical user offline. Therefore, the Internet community should ensure that critical components of the DNS ecosystem - that is, root name servers, top-level domain registrars and registries, authoritative name servers, and recursive resolvers - function smoothly. To this end, the community should monitor them periodically and provide public alerts about abnormal behavior. The authors propose a novel quantitative approach for evaluating the health of authoritative name servers - a critical, core, and large component of the DNS ecosystem. For most Internet components, performance is typically measured in terms of response time, reliability, and throughput. This research work proposes a novel list of parameters specifically for determining the health of authoritative name servers: DNS attack permeability, latency comparison, and DNSSEC validation.
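
As a rough illustration of two of these parameters, the sketch below measures query latency and checks DNSSEC validation via the AD flag returned by a validating resolver, using the dnspython library. The zone and resolver address are placeholders, and the paper's actual measurement methodology may differ.

```python
# Sketch: probe query latency and DNSSEC validation status for a name.
import time
import dns.flags
import dns.message
import dns.query

ZONE = "example.com"    # assumed zone to probe
RESOLVER = "8.8.8.8"    # validating resolver used for the AD check

def query_latency(server, name, rtype="A"):
    q = dns.message.make_query(name, rtype)
    t0 = time.perf_counter()
    dns.query.udp(q, server, timeout=2.0)
    return (time.perf_counter() - t0) * 1000.0   # milliseconds

def dnssec_validated(resolver, name):
    # AD=1 in the response means the answer passed the resolver's
    # DNSSEC validation.
    q = dns.message.make_query(name, "A", want_dnssec=True)
    q.flags |= dns.flags.AD
    r = dns.query.udp(q, resolver, timeout=2.0)
    return bool(r.flags & dns.flags.AD)

print(f"latency: {query_latency(RESOLVER, ZONE):.1f} ms")
print("DNSSEC validated:", dnssec_validated(RESOLVER, ZONE))
```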


2020 ◽  
Author(s):  
Shimpei Uesawa ◽  
Kiyoshi Toshida ◽  
Shingo Takeuchi ◽  
Daisuke Miura

Abstract Tephra falls can disrupt critical infrastructure, including transportation and electricity networks. Probabilistic assessments of tephra fall hazards have been performed using computational techniques, but it is also important to integrate long-term, regional geological records. To assess tephra fall load hazards in Japan, we re-digitized an existing database of 551 tephra distribution maps. We used the re-digitized datasets to produce hazard curves for a range of tephra loads for various localities. We calculated annual exceedance probabilities (AEPs) and constructed hazard curves from the most complete part of the geological record. We used records of tephra fall events with a Volcanic Explosivity Index (VEI) of 4–7 (based on survivor functions) that occurred over the last 150 ka, as the database contains a very high percentage (around 90%) of VEI 4–7 events for this period. We fitted the data for this period using a Poisson distribution function. Hazard curves were constructed for the tephra fall load at 47 prefectural offices throughout Japan, and four broad regions were defined (NE–W, NE–E, W, and SW Japan). AEPs were relatively high, exceeding 1 × 10⁻⁴ for loads greater than 0 kg/m² on the eastern (down-wind) side of the volcanic front in the NE–E region. In much of the W and SW regions, maximum loads were heavier, but AEPs were lower (<10⁻⁴). Tephras from large (VEI ≥ 6) events are the predominant hazard in every region. A parametric analysis was applied to investigate regional variability using AEP diagrams and slope shape parameters via curve fitting with exponential and double-exponential decay functions. Two major differences were recognized between the hazard curves from borehole data and those from the digitized tephra database. The first is a significant underestimation of AEP for frequent events using the tephra database, by one to two orders of magnitude. This is explained in terms of the lack of records for smaller tephra fall events in the database. The second is an overestimation of the heaviest tephra load events, which differ by a factor of two to three. This difference might be due to the tephra fall distribution contour interpolation methodology used to generate the original database. The hazard curve for Tokyo developed in this study differs from those that have been generated previously using computational techniques. For the Tokyo region, the probabilities and tephra loads produced by computational methods are at least one order of magnitude greater than those generated during the present study. These discrepancies are inferred to have been caused by initial parameter settings in the computational simulations, including their incorporation of large-scale eruptions of up to VEI = 7 for all large stratovolcanoes, regardless of their eruptive histories. To improve the precision of the digital database, we plan to incorporate recent (since 2003) tephra distributions, revise questionable isopach maps, and develop an improved interpolation method for digitizing tephra fall distributions.
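
To show the hazard-curve construction in miniature: for each load threshold, count the events at a site over the record length, treat occurrence as Poisson to get an AEP, and fit the exponential decay form used in the parametric analysis. The event loads below are made-up illustrative values, not data from the database.

```python
# Sketch: annual exceedance probability (AEP) curve from event counts
# over a 150 ka record, with an exponential-decay fit.
import numpy as np
from scipy.optimize import curve_fit

T = 150_000.0                                    # record length (years)
loads = np.array([0.5, 2.0, 8.0, 30.0, 120.0])   # kg/m2, illustrative events

thresholds = np.logspace(-1, 3, 50)
counts = np.array([(loads >= x).sum() for x in thresholds])
aep = 1.0 - np.exp(-counts / T)                  # Poisson annual exceedance

def exp_decay(x, a, b):
    return a * np.exp(-b * x)

mask = aep > 0
(a, b), _ = curve_fit(exp_decay, thresholds[mask], aep[mask], p0=(1e-4, 0.01))
print(f"fitted AEP(load) ~ {a:.2e} * exp(-{b:.3f} * load)")
```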


Author(s):  
W. Treurniet

Given its nature, a crisis has a significant community impact. This applies in particular to emergencies: crises that arise quickly. Because of the complex and multifaceted nature of large-scale incidents, the response requires coordinated effort by multiple organizations. This networked collaboration is not solely restricted to professional organizations. In responding to an incident, the affected community can itself be an important source of information and capabilities. This chapter discusses how one can shape a trustworthy and decisive response organization in which relevant and useful capacities available in the community are incorporated. This discussion has two focal points. The first focal point is the role of the affected community in the case of an emergency. On the one hand, an emergency affects the fabric of the community, such as the critical infrastructure. On the other, a community has inherent internal resources that give it resilience and capacity to respond in a crisis. This needs to be reflected in the choice of emergency response planning model. The second focal point is the structure of the emergency response network. An emergency response network is a mixed-sector network. This means that coordination is needed among organizations and collectives with differing strategic orientations.


Author(s):  
Paolo Donati ◽  
Tania Pomili ◽  
Luca Boselli ◽  
Pier P. Pompa

Early diagnostics and point-of-care (POC) devices can save people's lives or drastically improve their quality of life. In particular, millions of diabetic patients worldwide benefit from POC devices for frequent self-monitoring of blood glucose. Yet, this still involves invasive sampling processes, which are quite discomforting for frequent measurements, or implantable devices dedicated to selected chronic patients, thus precluding large-scale monitoring of the globally increasing diabetic disorders. Here, we report a non-invasive colorimetric sensing platform to identify hyperglycemia from saliva. We designed plasmonic multibranched gold nanostructures, able to rapidly change their shape and color (naked-eye detection) in the presence of hyperglycemic conditions. This "reshaping approach" provides a fast visual response and high sensitivity, overcoming common detection issues related to signal (color intensity) losses and bio-matrix interferences. Notably, optimal performances of the assay were achieved in real biological samples, where the biomolecular environment was found to play a key role. Finally, we developed a dipstick prototype as a rapid home-testing kit.


Author(s):  
Fantu Bachewe ◽  
Bart Minten ◽  
Alemayehu Seyoum Taffesse ◽  
Karl Pauw ◽  
Alethia Cameron ◽  
...  

Abstract While storage losses at the farm are often assumed to be an important contributor to presumed large postharvest losses in developing countries, reliable and representative data on these losses are often lacking. We study farmers' storage decisions and self-reported storage losses for grains based on two large-scale household surveys conducted in major agricultural areas in Ethiopia. We show that a relatively large share of grain production is stored by farm households for their own consumption and that storage technologies are rudimentary. Farmers' self-reported storage losses amount to an average of 4% of all grains stored and 2% of the total harvest. These storage losses differ significantly by socioeconomic variables and wealth, as well as by crop and humidity. We further observe strong spatial heterogeneity, with storage losses significantly higher in southwest Ethiopia. Efforts to scale up the adoption of improved storage technologies to reduce storage losses at the farm level should consider these characteristics.

