Transparent Augmented Black-Litterman Allocation: Simple and Unified Framework for Strategy Combination, Factor Mimicking, Hedging, and Stock-Specific Alphas

Author(s):  
Wing Cheung
2020, Vol 71 (7), pp. 868-880
Author(s):  
Nguyen Hong-Quan ◽  
Nguyen Thuy-Binh ◽  
Tran Duc-Long ◽  
Le Thi-Lan

Along with the strong development of camera networks, video analysis systems have become more and more popular and have been applied in various practical applications. In this paper, we focus on the person re-identification (person ReID) task, which is a crucial step of video analysis systems. The purpose of person ReID is to associate multiple images of a given person moving through a non-overlapping camera network. Many efforts have been devoted to person ReID. However, most studies on person ReID deal only with well-aligned bounding boxes that are detected manually and considered perfect inputs for person ReID. In fact, when building a fully automated person ReID system, the quality of the two preceding steps, person detection and tracking, may have a strong effect on ReID performance. The contributions of this paper are two-fold. First, a unified framework for person ReID based on deep learning models is proposed. In this framework, a deep neural network for person detection is coupled with a deep-learning-based tracking method. In addition, features extracted from an improved ResNet architecture are proposed for person representation to achieve higher ReID accuracy. Second, our self-built dataset is introduced and employed to evaluate all three steps of the fully automated person ReID framework.
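As a rough illustration of the ReID stage described above (detection and tracking are assumed to have already produced person crops), the sketch below embeds crops with a stock torchvision ResNet-50 and ranks gallery identities by cosine similarity; the backbone, input size, and matching rule are stand-ins for illustration, not the paper's improved ResNet or exact configuration.

```python
# Minimal sketch of the ReID matching stage, not the paper's exact pipeline:
# a stock torchvision ResNet-50 stands in for the authors' improved ResNet,
# and detections/tracks are assumed to already be cropped person images.
import torch
import torch.nn.functional as F
from torchvision import models, transforms

# Backbone truncated before the classifier; outputs a 2048-d embedding per crop.
backbone = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)
backbone.fc = torch.nn.Identity()
backbone.eval()

preprocess = transforms.Compose([
    transforms.Resize((256, 128)),          # common person-ReID aspect ratio (assumed)
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

@torch.no_grad()
def embed(crops):
    """Map a list of PIL person crops to L2-normalized feature vectors."""
    batch = torch.stack([preprocess(c) for c in crops])
    feats = backbone(batch)
    return F.normalize(feats, dim=1)

def match(query_feats, gallery_feats):
    """Rank gallery identities for each query crop by cosine similarity."""
    sim = query_feats @ gallery_feats.T          # (num_query, num_gallery)
    return sim.argsort(dim=1, descending=True)   # best match first
```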


2019
Author(s):  
Chi-Yun Lin ◽  
Matthew Romei ◽  
Luke Oltrogge ◽  
Irimpan Mathews ◽  
Steven Boxer

Green fluorescent proteins (GFPs) have become indispensable imaging and optogenetic tools. Their absorption and emission properties can be optimized for specific applications. Currently, no unified framework exists to comprehensively describe these photophysical properties, namely the absorption maxima, emission maxima, Stokes shifts, vibronic progressions, extinction coefficients, Stark tuning rates, and spontaneous emission rates, especially one that includes the effects of the protein environment. In this work, we study the correlations among these properties from systematically tuned GFP environmental mutants and chromophore variants. Correlation plots reveal monotonic trends, suggesting all these properties are governed by one underlying factor dependent on the chromophore's environment. By treating the anionic GFP chromophore as a mixed-valence compound existing as a superposition of two resonance forms, we argue that this underlying factor is defined as the difference in energy between the two forms, or the driving force, which is tuned by the environment. We then introduce a Marcus-Hush model with the bond length alternation vibrational mode, treating the GFP absorption band as an intervalence charge transfer band. This model explains all the observed strong correlations among photophysical properties; related subtopics are extensively discussed in Supporting Information. Finally, we demonstrate the model's predictive power by utilizing the additivity of the driving force. The model described here elucidates the role of the protein environment in modulating photophysical properties of the chromophore, providing insights and limitations for designing new GFPs with desired phenotypes. We argue this model should also be generally applicable to both biological and non-biological polymethine dyes.
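For reference, the simplest two-state mixed-valence picture underlying a Marcus-Hush treatment of this kind can be written as below; the symbols ΔG (driving force between the two resonance forms) and V (their electronic coupling) are generic notation assumed here for illustration, not necessarily the paper's.

```latex
% Two-state mixed-valence sketch (illustrative; Delta G and V are assumed
% names for the driving force and the electronic coupling).
\[
H =
\begin{pmatrix}
0 & V \\
V & \Delta G
\end{pmatrix},
\qquad
E_{\pm} = \frac{\Delta G \pm \sqrt{\Delta G^{2} + 4V^{2}}}{2}.
\]
% Vertical transition energy and the admixture of the higher-lying
% resonance form in the ground state:
\[
E_{\mathrm{abs}} = E_{+} - E_{-} = \sqrt{\Delta G^{2} + 4V^{2}},
\qquad
c^{2} = \tfrac{1}{2}\left(1 - \frac{\Delta G}{\sqrt{\Delta G^{2} + 4V^{2}}}\right).
\]
```

In this simplified picture, environmental tuning of ΔG alone shifts the transition energy and the charge-transfer character together, which is consistent with the single-underlying-factor correlations described in the abstract.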


Author(s):  
Wei Huang ◽  
Xiaoshu Zhou ◽  
Mingchao Dong ◽  
Huaiyu Xu

Robust and high-performance visual multi-object tracking is a significant challenge in computer vision, especially in drone scenarios. In this paper, an online Multi-Object Tracking (MOT) approach for UAV systems is proposed to handle small target detections and class imbalance, integrating the merits of a deep high-resolution representation network and a data association method in a unified framework. Specifically, within a tracking-by-detection architecture, a Hierarchical Deep High-resolution network (HDHNet) is proposed, which encourages the model to handle different types and scales of targets and to extract more effective and comprehensive features during online learning. The extracted features are then fed into different prediction networks for recognition of targets of interest. In addition, an adjustable fusion loss function is proposed by combining focal loss and GIoU loss to address class imbalance and hard samples. During tracking, the detection results in each frame are passed to an improved DeepSORT MOT algorithm, which makes full use of target appearance features for frame-by-frame matching in practice. Experimental results on the VisDrone2019 MOT benchmark show that the proposed UAV MOT system achieves the highest accuracy and the best robustness compared with state-of-the-art methods.
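A minimal sketch of a fused classification-plus-regression loss of the kind described above is given below, assuming the stock torchvision focal-loss and GIoU-loss implementations and a single weighting factor lambda_box; the paper's adjustable fusion loss may weight and normalize the terms differently.

```python
# Minimal sketch of a fused detection loss: focal loss for classification
# (class imbalance, hard samples) plus GIoU loss for box regression.
# `lambda_box` and the mean reduction are assumptions, not the paper's values.
import torch
from torchvision.ops import sigmoid_focal_loss, generalized_box_iou_loss

def fusion_loss(cls_logits, cls_targets, pred_boxes, gt_boxes, lambda_box=1.0):
    """cls_logits/cls_targets: (N, num_classes); boxes as (x1, y1, x2, y2)."""
    # Focal loss down-weights easy negatives, easing foreground/background imbalance.
    loss_cls = sigmoid_focal_loss(cls_logits, cls_targets,
                                  alpha=0.25, gamma=2.0, reduction="mean")
    # GIoU loss still penalizes poorly overlapping boxes even when IoU is zero.
    loss_box = generalized_box_iou_loss(pred_boxes, gt_boxes, reduction="mean")
    return loss_cls + lambda_box * loss_box
```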

