Almost Unsupervised Learning for Dense Crowd Counting

Author(s):  
Deepak Babu Sam ◽  
Neeraj N Sajjan ◽  
Himanshu Maurya ◽  
R. Venkatesh Babu

We present an unsupervised learning method for dense crowd count estimation. Owing to the large variability in the appearance of people and extreme overlap in crowds, counting proves to be a difficult task even for humans. This makes creating large-scale annotated crowd data expensive, and the resulting small datasets directly take a toll on the performance of existing CNN-based counting models. Motivated by these challenges, we develop a Grid Winner-Take-All (GWTA) autoencoder to learn several layers of useful filters from unlabeled crowd images. Our GWTA approach divides a convolution layer spatially into a grid of cells. Within each cell, only the maximally activated neuron is allowed to update the filter. Almost 99.9% of the parameters of the proposed model are trained without any labeled data, while the remaining 0.1% are tuned with supervision. The model achieves superior results compared to other unsupervised methods and stays reasonably close to the accuracy of the supervised baseline. Furthermore, we present comparisons and analyses regarding the quality of learned features across various models.
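The grid-wise winner-take-all operation described above can be sketched as follows. This is a minimal NumPy illustration of the core idea (keep only the maximally activated neuron in each spatial cell), not the authors' implementation; the function name and cell size are our own for illustration:

```python
import numpy as np

def gwta_mask(activations, cell):
    """Grid Winner-Take-All: within each (cell x cell) spatial block,
    keep only the maximally activated neuron and zero the rest, so
    only that winner contributes to the filter update.
    `activations` has shape (H, W); H and W are assumed divisible by `cell`."""
    h, w = activations.shape
    out = np.zeros_like(activations)
    for i in range(0, h, cell):
        for j in range(0, w, cell):
            block = activations[i:i + cell, j:j + cell]
            # locate the winner inside this cell
            r, c = np.unravel_index(np.argmax(block), block.shape)
            out[i + r, j + c] = block[r, c]
    return out

a = np.arange(16, dtype=float).reshape(4, 4)
m = gwta_mask(a, 2)  # exactly one nonzero survivor per 2x2 cell
```

In an autoencoder, the masked map `m` would replace the raw activations before reconstruction, so gradients flow only through the per-cell winners.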

Author(s):  
A. V. Ponomarev

Introduction: Large-scale human-computer systems that involve people of varying skills and motivation in information processing are currently used in a wide spectrum of applications. An acute problem in such systems is assessing the expected quality of each contributor, for example, in order to penalize incompetent or inaccurate contributors and to promote diligent ones. Purpose: To develop a method of assessing a contributor's expected quality in community tagging systems, using only the generally unreliable and incomplete information provided by contributors (with ground-truth tags unknown). Results: A mathematical model is proposed for community image tagging (including a model of a contributor), along with a method of assessing a contributor's expected quality. The method compares the tag sets provided by different contributors for the same images; it is a modification of the pairwise comparison method, with the preference relation replaced by a special domination characteristic. Expected contributor quality is evaluated as a positive eigenvector of the pairwise domination-characteristic matrix. Community tagging simulation has confirmed that the proposed method adequately estimates the expected quality of contributors (provided that the contributors' behavior fits the proposed model). Practical relevance: The obtained results can be used in the development of systems based on the coordinated efforts of a community (primarily, community tagging systems).
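The eigenvector step can be illustrated with plain power iteration. The domination matrix below is hypothetical and the entries are invented for illustration; the abstract does not specify how the domination characteristic is computed, so `D[i, j]` here simply stands for how strongly contributor i's tag sets dominate contributor j's on shared images:

```python
import numpy as np

def quality_scores(D, iters=200):
    """Estimate expected contributor quality as the positive (Perron)
    eigenvector of a nonnegative pairwise domination-characteristic
    matrix D, via power iteration."""
    v = np.ones(D.shape[0])
    for _ in range(iters):
        v = D @ v
        v /= np.linalg.norm(v)  # keep the iterate bounded
    return v / v.sum()          # normalise so scores sum to 1

# hypothetical 3-contributor domination matrix
D = np.array([[0.0, 2.0, 3.0],
              [1.0, 0.0, 2.0],
              [0.5, 1.0, 0.0]])
scores = quality_scores(D)  # contributor 0 dominates most, so scores[0] is largest
```

For a nonnegative irreducible matrix, Perron-Frobenius theory guarantees this dominant eigenvector is strictly positive, which is what makes it usable as a quality score.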


2022 ◽  
Vol 31 (1) ◽  
pp. 1-37
Author(s):  
Chao Liu ◽  
Xin Xia ◽  
David Lo ◽  
Zhiwe Liu ◽  
Ahmed E. Hassan ◽  
...  

To accelerate software development, developers frequently search for and reuse existing code snippets from large-scale codebases, e.g., GitHub. Over the years, researchers have proposed many information retrieval (IR)-based models for code search, but they fail to bridge the semantic gap between query and code. An early successful deep learning (DL)-based model, DeepCS, addressed this issue by learning the relationship between pairs of code methods and corresponding natural language descriptions. Two major advantages of DeepCS are its capability to recognize irrelevant/noisy keywords and to capture sequential relationships between words in query and code. In this article, we propose an IR-based model, CodeMatcher, that inherits the advantages of DeepCS (i.e., the capability of understanding the sequential semantics in important query words), while leveraging the indexing technique of IR-based models to substantially accelerate search response time. CodeMatcher first collects metadata for query words to identify irrelevant/noisy ones, then iteratively performs fuzzy search with the important query words on the codebase indexed by the Elasticsearch tool, and finally reranks the set of returned candidate code snippets according to how the tokens in each candidate sequentially match the important words in the query. We verified its effectiveness on a large-scale codebase with ~41K repositories. Experimental results showed that CodeMatcher achieves an MRR (a widely used accuracy measure for code search) of 0.60, outperforming DeepCS, CodeHow, and UNIF by 82%, 62%, and 46%, respectively. Our proposed model is over 1.2K times faster than DeepCS. Moreover, CodeMatcher outperforms two existing online search engines (GitHub and Google search) by 46% and 33%, respectively, in terms of MRR.
We also observed that fusing the advantages of IR-based and DL-based models is promising, and that improving the quality of method naming helps code search, since the method name plays an important role in connecting query and code.
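The reranking idea, scoring candidates by how their tokens match the important query words in order, can be sketched in a few lines. This is a simplified stand-in for CodeMatcher's actual reranking formula, which the abstract does not specify; the scoring rule and token lists here are our own:

```python
def sequential_match_score(query_words, code_tokens):
    """Walk the candidate's tokens and count how many important query
    words are matched *in sequence*; return the matched fraction."""
    qi = 0
    for tok in code_tokens:
        if qi < len(query_words) and query_words[qi] in tok.lower():
            qi += 1
    return qi / len(query_words)

def rerank(query_words, candidates):
    """Order candidate snippets (as token lists) by descending score."""
    return sorted(candidates,
                  key=lambda c: -sequential_match_score(query_words, c))

query = ["read", "file", "lines"]
cands = [
    ["lines", "file", "read"],                  # right words, wrong order
    ["read", "text", "file", "into", "lines"],  # words appear in query order
]
best = rerank(query, cands)[0]  # the sequentially matching candidate wins
```

Rewarding order, not just keyword overlap, is what lets an IR-style reranker approximate the sequential semantics a DL model like DeepCS learns.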


Entropy ◽  
2021 ◽  
Vol 23 (10) ◽  
pp. 1359
Author(s):  
Kaleel Mahmood ◽  
Deniz Gurevin ◽  
Marten van Dijk ◽  
Phuong Ha Nguyen

Many defenses have recently been proposed at venues like NIPS, ICML, ICLR, and CVPR. These defenses are mainly focused on mitigating white-box attacks and do not properly examine black-box attacks. In this paper, we expand the analyses of these defenses to include adaptive black-box adversaries. Our evaluation covers nine defenses: Barrage of Random Transforms, ComDefend, Ensemble Diversity, Feature Distillation, The Odds are Odd, Error Correcting Codes, Distribution Classifier Defense, K-Winner-Take-All, and Buffer Zones. Our investigation uses two black-box adversarial models and six widely studied adversarial attacks on the CIFAR-10 and Fashion-MNIST datasets. Our analyses show that most recent defenses (7 out of 9) provide only marginal improvements in security (<25%) compared to undefended networks. For every defense, we also show the relationship between the amount of data the adversary has at their disposal and the effectiveness of adaptive black-box attacks. Overall, our results paint a clear picture: defenses need both thorough white-box and black-box analyses to be considered secure. We provide this large-scale study and these analyses to motivate the field to move towards the development of more robust black-box defenses.


2018 ◽  
Vol 2018 ◽  
pp. 1-11 ◽  
Author(s):  
Xiaoying Tan ◽  
Yuchun Guo ◽  
Mehmet A. Orgun ◽  
Liyin Xue ◽  
Yishuai Chen

With the surging demand for high-quality mobile video services and the unabated development of new network technology, including fog computing, there is a need for a generalized quality of user experience (QoE) model that could provide insight for various network optimization designs. A good QoE, especially when measured as engagement, is an important optimization goal for investors and advertisers. Therefore, many works have focused on understanding how various factors, especially quality of service (QoS) factors, impact user engagement. However, the divergence of user interest is usually ignored or deliberately decoupled from QoS and/or other objective factors. With the increasing trend towards personalized applications, it is both necessary and feasible to consider user interest when optimizing user engagement, so as to satisfy the aesthetic and personal needs of users. We first propose an Extraction-Inference (E-I) algorithm to estimate user interest from easily obtained user behaviors. Based on our empirical analysis of a large-scale dataset, we then build a QoS and user Interest based Engagement (QI-E) regression model. Through experiments on our dataset, we demonstrate that the proposed model improves accuracy by 9.99% over a baseline model that considers only QoS factors. The proposed model has potential for designing QoE-oriented scheduling strategies in various network scenarios, especially in the fog computing context.
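The gain from adding an interest feature to a QoS-only regression can be demonstrated on synthetic data. This is not the QI-E model itself (the abstract gives neither its features nor its form); it is a least-squares sketch, with invented feature names and coefficients, showing why an interest term can lift engagement-prediction accuracy when engagement truly depends on interest:

```python
import numpy as np

def fit_linear(X, y):
    """Ordinary least-squares fit with an intercept column."""
    A = np.hstack([X, np.ones((X.shape[0], 1))])
    w, *_ = np.linalg.lstsq(A, y, rcond=None)
    return w

def r2(X, y, w):
    """Coefficient of determination for the fitted weights."""
    pred = np.hstack([X, np.ones((X.shape[0], 1))]) @ w
    return 1 - np.sum((y - pred) ** 2) / np.sum((y - y.mean()) ** 2)

# synthetic data: engagement driven by QoS features AND user interest
rng = np.random.default_rng(0)
n = 500
qos = rng.normal(size=(n, 2))        # e.g. startup delay, average bitrate
interest = rng.normal(size=(n, 1))   # inferred user interest
y = (0.5 * qos[:, 0] - 0.3 * qos[:, 1]
     + 0.8 * interest[:, 0] + 0.1 * rng.normal(size=n))

w_qos = fit_linear(qos, y)                          # QoS-only baseline
w_qie = fit_linear(np.hstack([qos, interest]), y)   # QoS + interest model
r2_qos = r2(qos, y, w_qos)
r2_qie = r2(np.hstack([qos, interest]), y, w_qie)   # markedly higher
```

Here the interest term absorbs variance the QoS-only baseline must leave as residual, mirroring the reported accuracy improvement.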


2014 ◽  
Vol 104 (5) ◽  
pp. 523-527 ◽  
Author(s):  
Daron Acemoglu ◽  
David Laibson ◽  
John A. List

Internet-based educational resources are proliferating rapidly. One concern associated with these (potentially transformative) technological changes is that they will be disequalizing—as many technologies of the last several decades have been—creating superstar teachers and a winner-take-all education system. These important concerns notwithstanding, we contend that a major impact of web-based educational technologies will be the democratization of education: educational resources will be more equally distributed, and lower-skilled teachers will benefit. At the root of our results is the observation that skilled lecturers can only exploit their comparative advantage if other teachers complement those lectures with face-to-face instruction. This complementarity will increase the quantity and quality of face-to-face teaching services, potentially increasing the marginal product and wages of lower-skill teachers.


2021 ◽  
Vol 14 (1) ◽  
Author(s):  
E. J. Huijbrechts ◽  
J. Dekker ◽  
M. Tenten-Diepenmaat ◽  
M. Gerritsen ◽  
M. van der Leeden

Abstract Background Foot and ankle problems are common in rheumatic disorders and often lead to pain and limitations in functioning, affecting quality of life. There appears to be large variability across podiatrists in the management of foot problems in rheumatic disorders. To increase the uniformity and quality of podiatry care for rheumatoid arthritis (RA), osteoarthritis (OA), spondyloarthritis (SpA), and gout, a clinical protocol has been developed. The research objectives were (1) to evaluate an educational programme to train podiatrists in the use of the protocol and (2) to explore barriers and facilitators for the use of the protocol in daily practice. Method This study used a mixed-methods design and included 32 podiatrists in the Netherlands. An educational programme was developed and provided to train the podiatrists in the use of the protocol. They thereafter received a digital questionnaire to evaluate the educational programme. Subsequently, the podiatrists used the protocol for three months in their practices. The facilitators and barriers they experienced in using the protocol were determined by a questionnaire, and semi-structured interviews were held to gain a more in-depth understanding. Results The mean satisfaction with the educational programme was 7.6 (SD 1.11) on an 11-point scale. Practical knowledge on joint palpation, programme variation, and the use of practice cases were valued most. The protocol appeared to provide support in the diagnosis, treatment, and evaluation of foot problems in rheumatic disorders, and the treatment recommendations were clear and understandable. The main barrier to use of the protocol was time: the protocol has not yet been implemented in the electronic patient file, which makes its use more time-consuming. Other barriers experienced were the reimbursement for the treatment and financial compensation.
Conclusions The educational programme concerning the clinical protocol for foot problems in rheumatic disorders appears to be helpful for podiatrists. Podiatrists perceived the protocol as supportive during patient management. Barriers to use of the protocol were identified and should be addressed prior to large-scale implementation. Whether the protocol is also beneficial for patients needs to be determined in future research.


Author(s):  
A. Babirad

Cerebrovascular diseases are a problem of today's world and, according to forecasts, will remain a problem in the near future. The main risk factors for the development of ischemic disorders of cerebral circulation include obesity and aging, arterial hypertension, smoking, diabetes mellitus, and heart disease. An effective strategy for the prevention of cerebrovascular events is based on the implementation of large-scale risk-control measures, including the use of antiplatelet and anticoagulant therapy and invasive interventions such as atherectomy, angioplasty, and stenting. In this connection, the joint efforts of neurologists, cardiologists, vascular surgeons, endocrinologists, and other specialists are the basis for achieving an acceptable clinical outcome. A review of the SF-36 method for assessing quality of life in patients after a transient ischemic attack is presented. Quality-of-life assessment is recognized in world medical practice and research as an indicator that is also used to assess the quality of the health system and in general sociological research.

