hybrid architecture
Recently Published Documents


TOTAL DOCUMENTS

789
(FIVE YEARS 236)

H-INDEX

29
(FIVE YEARS 8)

2022 ◽  
Vol 2022 ◽  
pp. 1-14
Author(s):  
Yue Liu ◽  
Junqi Ma ◽  
Xingzhen Tao ◽  
Jingyun Liao ◽  
Tao Wang ◽  
...  

In the era of digital manufacturing, the huge amount of image data generated by manufacturing systems cannot be instantly processed into valuable information due to the limitations (e.g., time) of traditional image processing techniques. In this paper, we propose a novel self-supervised self-attention learning framework, TriLFrame, for image representation learning. TriLFrame is based on a hybrid architecture of a Convolutional Network and a Transformer. Experiments show that TriLFrame outperforms state-of-the-art self-supervised methods on the ImageNet dataset and achieves competitive performance when transferring features learned on ImageNet to other classification tasks. Moreover, TriLFrame validates the proposed hybrid architecture, which combines the powerful local convolutional operation with the long-range nonlocal self-attention operation and works effectively in image representation learning tasks.
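The abstract names the two building blocks of the hybrid architecture but not their exact wiring. As a minimal illustration of the combination it describes (local convolution followed by global self-attention), the NumPy sketch below is a pedagogical stand-in, not TriLFrame itself; all shapes and weights are arbitrary:

```python
import numpy as np

def conv1d(x, w):
    """'Same'-padded 1-D convolution: the local feature-extraction stage.
    x: (seq_len, d_in), w: (k, d_in, d_out) with odd kernel size k."""
    k = w.shape[0]
    pad = k // 2
    xp = np.pad(x, ((pad, pad), (0, 0)))
    return np.stack([np.tensordot(xp[i:i + k], w, axes=([0, 1], [0, 1]))
                     for i in range(x.shape[0])])

def self_attention(x, wq, wk, wv):
    """Long-range nonlocal mixing: softmax(Q K^T / sqrt(d)) V."""
    q, k_, v = x @ wq, x @ wk, x @ wv
    scores = q @ k_.T / np.sqrt(q.shape[-1])
    scores -= scores.max(axis=-1, keepdims=True)      # numerical stability
    attn = np.exp(scores)
    attn /= attn.sum(axis=-1, keepdims=True)
    return attn @ v

rng = np.random.default_rng(0)
x = rng.standard_normal((16, 8))                      # 16 tokens, 8 dims
h = conv1d(x, rng.standard_normal((3, 8, 8)) * 0.1)   # local stage
z = self_attention(h, *(rng.standard_normal((8, 8)) * 0.1 for _ in range(3)))
print(z.shape)  # (16, 8)
```

Each output position mixes local context (through the convolution) with every other position (through the attention weights), which is the complementarity the abstract argues for.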


Author(s):  
Clement Nartey ◽  
Eric Tutu Tchao ◽  
James Dzisi Gadze ◽  
Bright Yeboah-Akowuah ◽  
Henry Nunoo-Mensah ◽  
...  

The integration of Internet of Things devices onto the Blockchain implies an increase in the transactions that occur on the Blockchain, and thus in its storage requirements. One solution approach is to leverage cloud resources for storing blocks within the chain. This paper therefore proposes two solutions to this problem: an improved hybrid architecture design that uses containerization to create a side chain on a fog node for the devices connected to it, and an Advanced Time-variant Multi-objective Particle Swarm Optimization algorithm (AT-MOPSO) for determining the optimal number of blocks that should be transferred to the cloud for storage. The algorithm uses time-variant weights for the particle swarm velocity, together with the non-dominated sorting and mutation schemes from NSGA-III. The proposed algorithm was compared with the original MOPSO algorithm, the Strength Pareto Evolutionary Algorithm (SPEA-II), the Pareto Envelope-based Selection Algorithm with region-based selection (PESA-II), and NSGA-III. The proposed AT-MOPSO showed better results than these algorithms in cloud storage cost and query probability optimization. Importantly, AT-MOPSO achieved 52% energy efficiency compared to NSGA-III. To show how this algorithm can be applied to a real-world Blockchain system, the BISS industrial Blockchain architecture was adapted and modified to demonstrate how AT-MOPSO can be used with existing Blockchain systems and the benefits it provides.
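The key modification named in the abstract is time-variant weights in the particle swarm velocity update. A minimal single-objective sketch of such a scheme is shown below; the linear weight schedules and test function are illustrative assumptions, not AT-MOPSO's actual multi-objective formulation:

```python
import numpy as np

def time_variant_pso(f, dim, n_particles=30, iters=200, seed=0):
    """PSO whose weights vary with time: inertia w decays from 0.9 to 0.4,
    the cognitive weight c1 decays while the social weight c2 grows,
    shifting the swarm from exploration to exploitation."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(-5, 5, (n_particles, dim))
    v = np.zeros_like(x)
    pbest = x.copy()
    pval = np.apply_along_axis(f, 1, x)
    g = pbest[pval.argmin()].copy()
    for t in range(iters):
        frac = t / iters
        w = 0.9 - 0.5 * frac           # time-variant inertia
        c1 = 2.5 - 1.5 * frac          # exploration fades...
        c2 = 0.5 + 1.5 * frac          # ...exploitation grows
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = x + v
        val = np.apply_along_axis(f, 1, x)
        better = val < pval
        pbest[better], pval[better] = x[better], val[better]
        g = pbest[pval.argmin()].copy()
    return g, pval.min()

# Minimize the sphere function as a toy stand-in for the storage-cost objectives.
best, cost = time_variant_pso(lambda p: np.sum(p * p), dim=3)
print(round(cost, 6))
```

The full AT-MOPSO additionally layers NSGA-III's non-dominated sorting and mutation on top of this velocity rule to handle several objectives (storage cost, query probability, energy) at once.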


2022 ◽  
Vol 571 ◽  
pp. 151384
Author(s):  
Zonglin Liu ◽  
Baoqiang Li ◽  
Yujie Feng ◽  
Dechang Jia ◽  
Caicai Li ◽  
...  

Sensors ◽  
2021 ◽  
Vol 22 (1) ◽  
pp. 301
Author(s):  
Rym Chéour ◽  
Mohamed Wassim Jmal ◽  
Sabrine Khriji ◽  
Dhouha El Houssaini ◽  
Carlo Trigona ◽  
...  

Wireless Sensor Networks (WSNs) operate under highly constrained resources; ensuring the proper functioning of the network is therefore a requirement, and an effective WSN management system has to be integrated for network efficiency. Our objective is to model, design, and propose a homogeneous WSN hybrid architecture. This work features a dedicated power-utilization optimization strategy for WSN applications, entitled Hybrid Energy-Efficient Power manager Scheduling (HEEPS). The strategy rests on two pillars: intertask time-out Dynamic Power Management (DPM) on the one hand and Dynamic Voltage and Frequency Scaling (DVFS) on the other. All tasks are scheduled under Global Earliest Deadline First (GEDF) with new scheduling tests to overcome the Dhall effect. To minimize energy consumption, HEEPS predicts, defines, and models the behavior adapted to each sensor node, as well as the associated energy management mechanism. HEEPS's performance is evaluated and analyzed using the STORM simulator, and a comparison with results obtained by various state-of-the-art approaches is presented. Results show that the proposed power manager effectively schedules tasks to use the available energy dynamically, with an estimated gain of up to 50%.
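HEEPS itself is evaluated in the STORM simulator; as a rough illustration of its two pillars, the sketch below applies a time-out DPM rule to idle gaps and a DVFS rule that runs a task at the slowest level still meeting its deadline, using the usual dynamic-energy model E ≈ C·V²·(cycles). All power numbers, levels, and time-outs are hypothetical:

```python
# Hypothetical DVFS levels (voltage V, frequency in GHz), slowest first,
# plus time-out DPM parameters. Illustrative values, not HEEPS's.
DVFS_LEVELS = [(0.8, 0.5), (1.0, 0.8), (1.2, 1.0)]
P_IDLE, P_SLEEP, TIMEOUT_MS = 0.30, 0.02, 2.0   # idle/sleep power (W), DPM time-out (ms)
C_EFF = 1.0                                     # normalized effective capacitance

def run_task(mcycles, deadline_ms):
    """DVFS pillar: pick the slowest (V, f) level whose run time fits the deadline."""
    for volt, freq in DVFS_LEVELS:
        t_ms = mcycles / freq                        # 1 Mcycle at 1 GHz takes 1 ms
        if t_ms <= deadline_ms:
            return t_ms, C_EFF * volt ** 2 * mcycles  # E_dyn = C * V^2 * cycles
    raise ValueError("deadline not schedulable at any level")

def idle_energy(gap_ms):
    """Time-out DPM pillar: stay idle up to the time-out, then sleep for the rest."""
    if gap_ms <= TIMEOUT_MS:
        return P_IDLE * gap_ms
    return P_IDLE * TIMEOUT_MS + P_SLEEP * (gap_ms - TIMEOUT_MS)

t, e_run = run_task(mcycles=4.0, deadline_ms=10.0)   # fits at 0.5 GHz: 8 ms
print(t, round(e_run, 2), round(idle_energy(10.0), 2))  # prints 8.0 2.56 0.76
```

Because dynamic energy scales with V², stretching a task to its deadline at a lower voltage saves energy, while the time-out rule avoids paying the sleep-transition cost on short idle gaps; HEEPS combines both decisions per node under GEDF.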


Author(s):  
Jibran Rasheed Khan ◽  
Shariq Mahmood Khan ◽  
Farhan Ahmed Siddiqui

Background: The last few decades have brought an astonishing revolution in technology and electronics, shrinking electronic devices into handy equipment called sensors. Such sensors enable the exploration of the 75% of the world's area that is covered by water, of which hardly 5% has been explored and which holds numerous applications. The security of underwater wireless sensor network (UWSN) communication is a prime concern for protecting its technological and application benefits. This paper explores UWSN architecture, vulnerabilities, attacks, and the possible factors that challenge UWSN security and its applications. Objectives: The primary objective of this work is to analyze the vulnerable factors that cause security challenges and threats to UWSN applications. The study focuses on the intermediate uplink point of the UWSN architecture and evaluates it in three different test cases, which should help build better solutions by devising appropriate schemes in the future. Method: A denial of service (DoS) attack is simulated using the ns-3 and Aquasim-ng simulators to determine which factor(s) threaten the UWSN environment. The simulation is performed under three idealized underwater scenarios: 1) a general UWSN (a hybrid architecture), 2) a special case of a UWSN environment with only underwater components, and 3) another special case with an underwater sink. All three test-case environments are assumed to be vulnerable to threats against UWSN security. Result: In all three scenarios, the average network performance under normal transmission is 88%, with a deviation of about ±3%. It was also observed that scenarios 1 and 2 are influenced by the adversary's interference or malicious activity, while no such effects occur in scenario 3, where the intermediate radio link and surface sink node(s) are absent. Thus, the experiments found that, among other factors, the intermediate radio link(s) of the onshore surface sink(s) or surface buoy(s) are vulnerable and pose threats to the UWSN. Conclusion: The simulation results and observations show that the intermediate uplink in the UWSN architecture is the more vulnerable element, which makes it insecure, while a pure underwater environment appears more secure than the general UWSN environment. In the future, more factors will be evaluated in the same or different cases to determine other UWSN issues and vulnerable factors.


2021 ◽  
Author(s):  
Akram Hadeed

Recently, technology scaling has enabled the placement of an increasing number of cores, in the form of chip multiprocessors (CMPs), on a chip, with continually shrinking transistor sizes improving performance. In this context, power consumption has become the main constraint in designing CMPs. The power consumption of uncore components takes an increasing portion of the on-chip power budget, so designing power management techniques, particularly for memory and network-on-chip (NoC) systems, has become an important problem to solve. Consequently, considerable attention has been directed toward power management of CMP components, particularly shared caches and uncore interconnect structures, to overcome the challenges of a limited chip power budget.

This work aims to design an energy-efficient uncore architecture by using heterogeneity in components (cache cells) and in operational parameters (voltage/frequency). To ensure minimal impact on system performance, a run-time approach is investigated to assess the proposed method. An architecture is proposed in which the cache layer contains heterogeneous cache banks, all placed in one voltage/frequency domain. Average memory access time (AMAT) was selected as the metric for monitoring performance at run time. The appropriate size and type of the last-level cache (LLC) and the voltage/frequency of the uncore domain are adjusted according to the calculated AMAT, which indicates the system's demand on the uncore.

The proposed hybrid architecture was implemented, investigated, and compared with a baseline model in which only SRAM banks were used in the last-level cache. Experimental results on the Princeton Application Repository for Shared-Memory Computers (PARSEC) benchmark suite show that the proposed architecture yields up to a 40% reduction in overall chip energy-delay product with a marginal average performance degradation of 1.2% below the baseline. The best energy saving was 55%, and the worst degradation was only 15%.
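The run-time policy hinges on the standard decomposition AMAT = hit time + miss rate × miss penalty. A toy controller in the spirit described, where measured AMAT steps the uncore voltage/frequency level up or down, could look like the following (thresholds and levels are assumptions, not the work's calibrated values):

```python
# Toy AMAT-driven uncore DVFS controller. All numbers are illustrative.

def amat(hit_time, miss_rate, miss_penalty):
    """Average Memory Access Time = hit time + miss rate * miss penalty (cycles)."""
    return hit_time + miss_rate * miss_penalty

UNCORE_LEVELS = [(0.8, 1.0), (1.0, 1.5), (1.2, 2.0)]  # (V, f in GHz), low to high
HIGH_DEMAND, LOW_DEMAND = 12.0, 6.0                   # AMAT thresholds (cycles)

def adjust_level(level, measured_amat):
    """Step the uncore domain up under high demand, down under low demand."""
    if measured_amat > HIGH_DEMAND and level < len(UNCORE_LEVELS) - 1:
        return level + 1
    if measured_amat < LOW_DEMAND and level > 0:
        return level - 1
    return level

a = amat(hit_time=4.0, miss_rate=0.1, miss_penalty=100.0)
print(a, adjust_level(1, a))  # prints 14.0 2 : high demand, step up
```

A rising AMAT signals that the cores are waiting on the uncore, justifying a higher (costlier) voltage/frequency point; a low AMAT lets the domain drop to a cheaper point with marginal performance loss, which is the trade-off behind the reported energy-delay-product savings.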




Mathematics ◽  
2021 ◽  
Vol 9 (24) ◽  
pp. 3326
Author(s):  
Noman Khan ◽  
Ijaz Ul Haq ◽  
Fath U Min Ullah ◽  
Samee Ullah Khan ◽  
Mi Young Lee

Traditional power generation technologies rely on fossil fuels, which contribute to worldwide environmental issues such as global warming and climate change. As a result, renewable energy sources (RESs) are used for power generation, with battery energy storage systems (BESSs) widely used to store electrical energy for backup, to match power consumption and generation during peak hours, and to promote energy efficiency in a pollution-free environment. Accurate battery state of health (SOH) prediction is critical because it plays a key role in ensuring battery safety, lowering maintenance costs, and reducing BESS inconsistencies. Precise power consumption forecasting is likewise critical for preventing power shortage and oversupply, and the complicated physicochemical features of battery degradation cannot be directly acquired. Therefore, in this paper, a novel hybrid architecture called 'CL-Net', based on convolutional long short-term memory (ConvLSTM) and long short-term memory (LSTM), is proposed for multi-step SOH and power consumption forecasting. First, battery SOH and power consumption-related raw data are collected and passed through a preprocessing step for data cleansing. Second, the processed data are fed into ConvLSTM layers, which extract spatiotemporal features and form their encoded maps. Third, LSTM layers decode the encoded features and pass them to fully connected layers for final multi-step forecasting. Finally, a comprehensive ablation study is conducted on several combinations of sequential learning models using three different time series datasets: the national aeronautics and space administration (NASA) battery, individual household electric power consumption (IHEPC), and domestic energy management system (DEMS) datasets. The proposed CL-Net architecture reduces root mean squared error (RMSE) by up to 0.13 and 0.0052 on the NASA battery and IHEPC datasets, respectively, compared to the state of the art. These experimental results show that the proposed architecture can provide robust and accurate SOH and power consumption forecasting.
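CL-Net's layers would normally be built in a deep-learning framework; to make the ConvLSTM idea concrete, here is a single-channel 1-D ConvLSTM cell step in plain NumPy, i.e., the four LSTM gates with same-padded convolutions replacing the dense multiplications. This is a pedagogical sketch, not the paper's model, and all shapes are arbitrary:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def convlstm1d_step(x, h, c, kx, kh, b):
    """One ConvLSTM step on a 1-channel 1-D sequence.
    kx, kh: (4, k) input/hidden kernels; b: (4,) biases; gate order i, f, o, g."""
    pre = [np.convolve(x, kx[j], mode="same") +
           np.convolve(h, kh[j], mode="same") + b[j] for j in range(4)]
    i, f, o = sigmoid(pre[0]), sigmoid(pre[1]), sigmoid(pre[2])
    g = np.tanh(pre[3])
    c_new = f * c + i * g            # convex blend of old memory and new input
    h_new = o * np.tanh(c_new)       # gated hidden state (spatially structured)
    return h_new, c_new

rng = np.random.default_rng(1)
L = 12                               # spatial length of each "frame"
h = c = np.zeros(L)
kx = rng.standard_normal((4, 3)) * 0.5
kh = rng.standard_normal((4, 3)) * 0.5
b = np.zeros(4)
for x_t in rng.standard_normal((5, L)):   # five time steps
    h, c = convlstm1d_step(x_t, h, c, kx, kh, b)
print(h.shape)  # (12,)
```

Because the gates are convolutions, the hidden state keeps the spatial layout of the input at every time step, which is what lets the ConvLSTM encoder extract the spatiotemporal feature maps that the plain LSTM decoder then flattens into multi-step forecasts.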

