fusion mode
Recently Published Documents


TOTAL DOCUMENTS: 41 (FIVE YEARS: 19)

H-INDEX: 10 (FIVE YEARS: 0)

2022 ◽  
Vol 2148 (1) ◽  
pp. 012006
Author(s):  
Xiaoliang Yang ◽  
Honggang Zhao ◽  
Hongyuan Ren

Abstract Multi-station integration is a new form of business in the development of the energy Internet and a brand new practice of the power IoT. It is an innovative form that takes substations, energy storage stations, distributed energy stations and other resources as its core, strengthens the interaction between source, grid, load and storage on both sides of energy supply and demand, and enhances the flexibility of the power grid. However, there is not yet a construction mode that can guide implementation and be copied and popularized. Therefore, guided by the existing construction practice of multi-station fusion in China and combined with multi-user scenarios, this paper studies the multi-station fusion mode, proposes a multi-station fusion planning system based on EIST theory, gives the fusion mode under different business scenarios, and synchronously constructs a new ecological business chain with multi-station fusion at its core. The aim is to make full use of the innate advantages of substations in energy flow convergence, realize the “integration of energy flow, business flow and data flow”, comprehensively support the transformation toward a digital power grid, and implement the national development strategies of the “digital economy” and “digital China”.


Ceramics ◽  
2021 ◽  
Vol 5 (1) ◽  
pp. 24-33
Author(s):  
Vladimir G. Babashov ◽  
Sultan Kh. Suleimanov ◽  
Mikhail I. Daskovskii ◽  
Evgeny A. Shein ◽  
Yurii V. Stolyankov

Three ceramic fibrous materials of the Al2O3-SiO2 system with different densities were treated using concentrated solar radiation. The experiment was performed using the technological capabilities of the Big Solar Furnace in two modes: the first mode involved heating to 1400–1600 °C and holding for 1.5–2 h; the second mode (the fusion mode) involved heating to 1750–1900 °C until destruction of the sample, which is accompanied by fusion. Upon completion of the experiment, the phase composition, microstructure, and compressive strength of the materials were studied. It was shown that the investigated materials retained their fibrous structure under prolonged treatment in the first mode up to temperatures of 1600 °C. The phase composition of the ceramic materials changes during the experiment, and the lower the density, the more pronounced the modification. Treatment of all three materials under study in the fusion mode resulted in the formation of a eutectic component in the form of spherulites. The compressive strength of the materials was found to be slightly reduced after exposure to concentrated solar radiation.


Entropy ◽  
2021 ◽  
Vol 23 (11) ◽  
pp. 1507
Author(s):  
Feiyu Zhang ◽  
Luyang Zhang ◽  
Hongxiang Chen ◽  
Jiangjian Xie

Deep convolutional neural networks (DCNNs) have achieved breakthrough performance in bird species identification using spectrograms of bird vocalizations. To address the class imbalance of the bird vocalization dataset, a single feature identification model (SFIM) with residual blocks and a modified weighted cross-entropy loss function was proposed. To further improve identification accuracy, two multi-channel fusion methods were built from three SFIMs: one fused the outputs of the feature extraction parts of the three SFIMs (feature fusion mode), the other fused the outputs of the classifiers of the three SFIMs (result fusion mode). The SFIMs were trained with three different kinds of spectrograms, calculated through the short-time Fourier transform, mel-frequency cepstrum transform and chirplet transform, respectively. To overcome the drawback of the huge number of trainable model parameters, transfer learning was used in the multi-channel models. Using our own vocalization dataset as a sample set, the result fusion mode model outperforms the other proposed models; its best mean average precision (MAP) reaches 0.914. Comparing three spectrogram durations (100 ms, 300 ms and 500 ms), the results reveal that the 300 ms duration is the best for our own dataset, and it is suggested that the duration be determined based on the duration distribution of bird syllables. With the BirdCLEF2019 training dataset, the highest classification mean average precision (cmAP) reached 0.135, which indicates that the proposed model has a certain generalization ability.
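As an illustration of the two fusion modes described above, the following minimal PyTorch sketch contrasts feature fusion (concatenating the feature-extraction outputs of three single-feature models before one classifier) with result fusion (averaging the class scores of three independent classifiers). The backbone layers, feature dimension and single-channel spectrogram inputs are placeholder assumptions, not the authors' actual SFIM architecture.

```python
import torch
import torch.nn as nn

class SFIMBackbone(nn.Module):
    """Stand-in for the feature-extraction part of one single feature
    identification model (SFIM); the paper's residual-block design is not
    reproduced here."""
    def __init__(self, feat_dim=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(16, feat_dim), nn.ReLU(),
        )

    def forward(self, x):
        return self.net(x)

class FeatureFusionModel(nn.Module):
    """Feature fusion mode: concatenate the three backbones' features,
    then classify once."""
    def __init__(self, num_classes, feat_dim=128):
        super().__init__()
        self.backbones = nn.ModuleList([SFIMBackbone(feat_dim) for _ in range(3)])
        self.classifier = nn.Linear(3 * feat_dim, num_classes)

    def forward(self, stft_spec, mel_spec, chirplet_spec):
        feats = [b(x) for b, x in zip(self.backbones,
                                      (stft_spec, mel_spec, chirplet_spec))]
        return self.classifier(torch.cat(feats, dim=1))

class ResultFusionModel(nn.Module):
    """Result fusion mode: each channel keeps its own classifier and the
    per-channel class scores are averaged."""
    def __init__(self, num_classes, feat_dim=128):
        super().__init__()
        self.backbones = nn.ModuleList([SFIMBackbone(feat_dim) for _ in range(3)])
        self.heads = nn.ModuleList([nn.Linear(feat_dim, num_classes) for _ in range(3)])

    def forward(self, stft_spec, mel_spec, chirplet_spec):
        logits = [h(b(x)) for b, h, x in zip(self.backbones, self.heads,
                                             (stft_spec, mel_spec, chirplet_spec))]
        return torch.stack(logits).mean(dim=0)
```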


2021 ◽  
Vol 2021 ◽  
pp. 1-13
Author(s):  
Yingshuai Wang ◽  
Dezheng Zhang ◽  
Aziguli Wulamu

For better user satisfaction and business effectiveness, training models that predict click and order targets at the same time through multitask learning is one of the most important methods in e-commerce. Some existing studies model user representations based on historical behaviour sequences to capture user interests, yet user interests often change from their past routines. Multi-perspective attention, in contrast, has a broad horizon, covering different characteristics of human reasoning, emotion, perception, attention, and memory. In this paper, we introduce multi-perspective attention and sequence behaviour into multitask learning. Our proposed method offers a better understanding of user interests and decisions. To achieve more flexible parameter sharing while maintaining the distinctive feature advantage of each task, we improve the attention mechanism from the viewpoint of expert interaction. To the best of our knowledge, we are the first to propose the implicit interaction mode, the explicit hard interaction mode, the explicit soft interaction mode, and the data fusion mode in multitask learning. We conduct experiments on public data and laboratory medical data. The results show that our model consistently achieves remarkable improvements over the state-of-the-art method.
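For readers who want a concrete starting point, the sketch below shows a generic shared-bottom multitask model that predicts click and order probabilities from a single feature vector. It is only a baseline illustration of joint click/order prediction; it does not reproduce the paper's multi-perspective attention or its four interaction/fusion modes, and all dimensions are assumptions.

```python
import torch
import torch.nn as nn

class SharedBottomMultitask(nn.Module):
    """Baseline multitask model: a shared bottom network feeds two
    task-specific heads for click and order prediction (illustrative only;
    not the paper's expert-interaction architecture)."""
    def __init__(self, in_dim=64, hidden=128):
        super().__init__()
        self.shared = nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU())
        self.click_head = nn.Linear(hidden, 1)   # task 1: click probability
        self.order_head = nn.Linear(hidden, 1)   # task 2: order probability

    def forward(self, x):
        h = self.shared(x)
        return torch.sigmoid(self.click_head(h)), torch.sigmoid(self.order_head(h))
```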


2021 ◽  
Vol 12 (1) ◽  
Author(s):  
Seong J. An ◽  
Felix Rivera-Molina ◽  
Alexander Anneken ◽  
Zhiqun Xi ◽  
Brian McNellis ◽  
...  

Abstract Vesicle tethers are thought to underpin the efficiency of intracellular fusion by bridging vesicles to their target membranes. However, the interplay between tethering and fusion has remained enigmatic. Here, through optogenetic control of either a natural tether—the exocyst complex—or an artificial tether, we report that tethering regulates the mode of fusion. We find that vesicles mainly undergo kiss-and-run instead of full fusion in the absence of functional exocyst. Full fusion is rescued by optogenetically restoring exocyst function, in a manner likely dependent on the stoichiometry of tether engagement with the plasma membrane. In contrast, a passive artificial tether produces mostly kissing events, suggesting that kiss-and-run is the default mode of vesicle fusion. Optogenetic control of tethering further shows that fusion mode has physiological relevance since only full fusion could trigger lamellipodial expansion. These findings demonstrate that active coupling between tethering and fusion is critical for robust membrane merger.


PLoS ONE ◽  
2021 ◽  
Vol 16 (7) ◽  
pp. e0254715
Author(s):  
Kavitha Venkataramanan ◽  
Swanandi Gawde ◽  
Amithavikram R. Hathibelagal ◽  
Shrikant R. Bharadwaj

Spot-the-difference, the popular childhood game and a prototypical change blindness task, involves identifying differences in local features of two otherwise identical scenes using an eye scanning and matching strategy. Through binocular fusion of the companion scenes, the game becomes a visual search task, wherein players can simply scan the cyclopean percept for local features that stand out distinctly due to binocular rivalry/lustre. Here, we had a total of 100 visually normal adult volunteers (18–28 years of age) play this game in the traditional non-fusion mode and after cross-fusion of the companion images using a hand-held mirror stereoscope. The results demonstrate that the fusion mode significantly speeds up gameplay and reduces errors, relative to the non-fusion mode, across the range of target sizes, contrasts, and chromaticities tested (all p<0.001). Among the three types of local feature differences available in these images (a polarity difference, the presence/absence of a local feature, and a shape difference in a local feature), features containing a polarity difference were identified first in ~60–70% of instances in both modes of gameplay (p<0.01), with this proportion being larger in the fusion than in the non-fusion mode. The binocular fusion advantage is lost when the lustre cue is purposefully weakened through alterations in target luminance polarity. The spot-the-difference game may thus be cheated using binocular fusion, with the differences readily identified through a vivid experience of binocular rivalry/lustre.


2021 ◽  
Vol 2021 ◽  
pp. 1-14
Author(s):  
Liancheng Yin ◽  
Peiyi Yang ◽  
Keming Mao ◽  
Qian Liu

Remote sensing image scene classification is a hot research area owing to its wide applications. More recently, fusion-based methods have attracted much attention since they are considered a useful way to represent scene features. This paper explores fusion-based methods for remote sensing image scene classification from another viewpoint. First, fusion is categorized into the front side fusion mode, middle side fusion mode, and back side fusion mode, and for each fusion mode the related methods are introduced and described. Then, the classification performance of the single side fusion modes and hybrid side fusion modes (combinations of single side fusions) is evaluated. Comprehensive experiments on the UC Merced, WHU-RS19, and NWPU-RESISC45 datasets provide a comparison among the various fusion methods. Performance comparisons of the various modes and interactions among the different fusion modes are also discussed. It is concluded that (1) fusion is an effective way to improve model performance, (2) back side fusion is the most powerful fusion mode, and (3) the method combining random crop, multiple backbones and averaging achieves the best performance.
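To make the terminology concrete, the sketch below illustrates a hybrid combination in the spirit of the best-performing recipe: random crop on the input (front side) plus multiple backbones whose class scores are averaged (back side). The choice of ResNet backbones and the crop size are assumptions, not the paper's configuration.

```python
import torch
import torch.nn as nn
from torchvision import models, transforms

# Front side: fuse information at the input via random-crop augmentation.
front_side_augment = transforms.Compose([
    transforms.RandomCrop(224),   # crop size is an assumption
    transforms.ToTensor(),
])

class BackSideFusion(nn.Module):
    """Back side (decision-level) fusion: several backbones each produce
    class scores and the scores are averaged."""
    def __init__(self, num_classes):
        super().__init__()
        self.backbones = nn.ModuleList([
            models.resnet18(num_classes=num_classes),
            models.resnet34(num_classes=num_classes),
        ])

    def forward(self, x):
        scores = [torch.softmax(b(x), dim=1) for b in self.backbones]
        return torch.stack(scores).mean(dim=0)
```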


2021 ◽  
Vol 13 (9) ◽  
pp. 1619
Author(s):  
Bin Yan ◽  
Pan Fan ◽  
Xiaoyan Lei ◽  
Zhijie Liu ◽  
Fuzeng Yang

The apple target recognition algorithm is one of the core technologies of the apple picking robot. However, most existing apple detection algorithms cannot distinguish between apples occluded by tree branches and apples occluded by other apples, and the apples, the grasping end-effector and the mechanical picking arm of the robot are very likely to be damaged if such an algorithm is applied directly to the picking robot. Based on this practical problem, and in order to automatically recognize graspable and ungraspable apples in an apple tree image, a lightweight apple target detection method using an improved YOLOv5s was proposed for the picking robot. Firstly, the BottleneckCSP module was redesigned as the BottleneckCSP-2 module, which replaced the BottleneckCSP module in the backbone architecture of the original YOLOv5s network. Secondly, the SE module, a visual attention mechanism, was inserted into the proposed improved backbone network. Thirdly, the fusion mode of the feature maps that are input to the medium-size target detection layer of the original YOLOv5s network was improved. Finally, the initial anchor box sizes of the original network were improved. The experimental results indicated that the graspable apples, which were unoccluded or occluded only by tree leaves, and the ungraspable apples, which were occluded by tree branches or by other fruits, could be identified effectively using the proposed improved network model. Specifically, the recognition recall, precision, mAP and F1 were 91.48%, 83.83%, 86.75% and 87.49%, respectively, and the average recognition time was 0.015 s per image. Compared with the original YOLOv5s, YOLOv3, YOLOv4 and EfficientDet-D0 models, the mAP of the proposed improved YOLOv5s model increased by 5.05%, 14.95%, 4.74% and 6.75%, respectively, and the model size was compressed by 9.29%, 94.6%, 94.8% and 15.3%, respectively. The average recognition speed per image of the proposed improved YOLOv5s model was 2.53, 1.13 and 3.53 times that of EfficientDet-D0, YOLOv4 and YOLOv3, respectively. The proposed method can provide technical support for the real-time accurate detection of multiple fruit targets by the apple picking robot.
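For reference, the sketch below shows a standard squeeze-and-excitation (SE) block of the kind inserted into the improved backbone: global average pooling followed by two fully connected layers that produce channel-wise weights. The reduction ratio and placement are assumptions; this is not the authors' exact implementation.

```python
import torch.nn as nn

class SEBlock(nn.Module):
    """Standard squeeze-and-excitation block: squeeze spatial information
    with global average pooling, then excite (re-weight) the channels."""
    def __init__(self, channels, reduction=16):  # reduction ratio is an assumption
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction), nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels), nn.Sigmoid(),
        )

    def forward(self, x):
        b, c, _, _ = x.shape
        w = self.fc(self.pool(x).view(b, c)).view(b, c, 1, 1)
        return x * w  # channel-wise re-weighting of the input feature map
```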

