Revisiting the CompCars Dataset for Hierarchical Car Classification: New Annotations, Experiments, and Results

Sensors ◽  
2021 ◽  
Vol 21 (2) ◽  
pp. 596
Author(s):  
Marco Buzzelli ◽  
Luca Segantin

We address the task of classifying car images at multiple levels of detail, ranging from the top-level car type down to the specific car make, model, and year. We analyze existing datasets for car classification and identify CompCars as an excellent starting point for our task. We show that convolutional neural networks achieve an accuracy above 90% on the finest-level classification task. This high performance, however, is scarcely representative of real-world situations, as it is evaluated on a biased training/test split. In this work, we revisit the CompCars dataset by first defining a new training/test split that better represents real-world scenarios, yielding a more realistic baseline of 61% accuracy on the new test set. We also propagate the existing (but limited) type-level annotation to the entire dataset, and finally we provide a car-tight bounding box for each image, automatically generated by an ad hoc car detector. To evaluate this revisited dataset, we design and implement three different approaches to car classification, two of which exploit the hierarchical nature of the car annotations. Our experiments show that higher-level classification in terms of car type positively impacts classification at a finer grain, now reaching 70% accuracy. The achieved performance constitutes a baseline benchmark for future research, and our enriched set of annotations is made available for public download.
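The abstract does not specify the three classification approaches, so the following is only a minimal PyTorch sketch of the general idea of letting a coarse type-level head inform the fine-grained make/model head; the backbone, layer sizes, and class counts are illustrative stand-ins, not the authors' architecture.

```python
import torch
import torch.nn as nn

class HierarchicalCarClassifier(nn.Module):
    """Toy two-head classifier: the coarse head predicts the car type,
    and its logits are concatenated with the image features before the
    fine-grained head, so type-level evidence can guide the model-level
    decision. All sizes below are illustrative, not from the paper."""

    def __init__(self, num_types=12, num_models=431, feat_dim=512):
        super().__init__()
        self.backbone = nn.Sequential(   # stand-in for a real CNN backbone
            nn.Flatten(),
            nn.LazyLinear(feat_dim),
            nn.ReLU(),
        )
        self.type_head = nn.Linear(feat_dim, num_types)
        self.model_head = nn.Linear(feat_dim + num_types, num_models)

    def forward(self, x):
        feats = self.backbone(x)
        type_logits = self.type_head(feats)
        # Condition the fine-grained prediction on the coarse one.
        model_logits = self.model_head(torch.cat([feats, type_logits], dim=1))
        return type_logits, model_logits

model = HierarchicalCarClassifier()
type_logits, model_logits = model(torch.randn(4, 3, 64, 64))  # dummy batch
# Training would minimize a joint loss, e.g. CE(type) + CE(model).
```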

2015 ◽  
Vol 12 (4) ◽  
pp. 539-551
Author(s):  
Mark N. Jensen

Christian List and Philip Pettit’s new book, Group Agency: The Possibility, Design, and Status of Corporate Agents, is an interesting, timely, and extremely clever synthesis of the deliverances of their recent technical work on the philosophical, moral, and legal nature of group agents. Their meticulously developed ideal group agent provides an excellent starting point for analytic reflection on group agency, identity, epistemology, and responsibility. Insofar as it is their intent for their account to have real-world consequences, their model provides a template for political associations, businesses, and civil society organizations. This review essay explains List and Pettit’s model and then points out two unattractive features. First, a bird’s eye view of the conditions required to achieve ideal group agency reveals limitations that may make it impossible to realize. Second, some of these groups, especially businesses and civil society organizations, will find the model unattractive, limiting its real-world applicability.


Author(s):  
Johanna K. Kaakinen

In this commentary on the Special Issue of Educational Psychology Review on visual perceptual processes, I tie the empirical studies reported in the issue to previous research in other domains and offer some points to be considered in future studies. First, I point out issues related to the operationalization of the theoretical constructs. The empirical papers in this Special Issue use eye tracking to study students’ engagement, teachers’ expertise, and student-teacher interaction. However, it is not always clear how the observed eye movement patterns reflect these theoretical concepts and the underlying psychological processes. Second, I reflect on the analyses of the eye movement data presented in the papers. The main advantage of the methodology is that it can provide detailed information about the time-course of processing; to fully exploit its potential, it should be complemented with adequate statistical methods. In my view, the papers in this Special Issue provide valuable novel information about the complex processes underlying learning in variable contexts, and offer an excellent starting point for future research.


Author(s):  
Johanna Rokka ◽  
Eva Schlein ◽  
Jonas Eriksson

Introduction: [11C]UCB-J is a tracer developed for positron emission tomography (PET) that has high affinity for synaptic vesicle glycoprotein 2A (SV2A), a protein believed to participate in the regulation of neurotransmitter release in neurons and endocrine cells. The localisation of SV2A in the synaptic terminals makes it a viable target for in vivo imaging of synaptic density in the brain. Several SV2A-targeting compounds have been evaluated as PET tracers, including [11C]UCB-J, with the aim of facilitating studies of synaptic density in neurological diseases. The original two-step synthesis method failed in our hands to produce sufficient amounts of [11C]UCB-J, but it served as an excellent starting point for further optimization towards a high-yielding and simplified one-step method. [11C]Methyl iodide was trapped in a clear THF-water solution containing the trifluoroborate-substituted precursor, potassium carbonate, and a palladium complex. The resulting reaction mixture was heated at 70 °C for 4 min to produce [11C]UCB-J.

Results: After semi-preparative HPLC purification and reformulation in 10% ethanol/phosphate-buffered saline, the product was obtained in 39 ± 5% radiochemical yield based on [11C]methyl iodide, corresponding to 1.8 ± 0.5 GBq at end of synthesis (EOS). The radiochemical purity was >99% and the molar activity was 390 ± 180 GBq/μmol at EOS. The product solution contained <2 ppb palladium.

Conclusions: A robust and high-yielding production method has been developed for [11C]UCB-J, suitable for both preclinical and clinical PET applications.
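As a sanity check on the reported figures (not a calculation from the paper), the molar activity relates the batch activity to the amount of substance at EOS:

```latex
n \;=\; \frac{A_{\mathrm{EOS}}}{A_m}
  \;=\; \frac{1.8\ \mathrm{GBq}}{390\ \mathrm{GBq}/\mu\mathrm{mol}}
  \;\approx\; 4.6\ \mathrm{nmol}
```

That is, a full batch contains only a few nanomoles of the compound, as expected for a tracer-level PET dose.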


2021 ◽  
Vol 21 (3) ◽  
pp. 1-17
Author(s):  
Wu Chen ◽  
Yong Yu ◽  
Keke Gai ◽  
Jiamou Liu ◽  
Kim-Kwang Raymond Choo

In existing ensemble learning algorithms (e.g., random forest), each base learner needs access to the entire dataset for sampling and training. However, this may not be practical in many real-world applications, and it incurs additional computational costs. To achieve better efficiency, we propose a decentralized framework: Multi-Agent Ensemble. The framework leverages edge computing to support ensemble learning while balancing access restrictions (each learner holds only a small sub-dataset) against accuracy. Specifically, network edge nodes (learners) build the classification and prediction models in our framework. Data is distributed to multiple base learners, which exchange information via an interaction mechanism to achieve improved predictions. The proposed approach thus relies on local training at the edge rather than conventional centralized learning. Findings from experimental evaluations using 20 real-world datasets suggest that Multi-Agent Ensemble outperforms other ensemble approaches in terms of accuracy even though each base learner requires fewer samples (i.e., a significant reduction in computation costs).
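The interaction mechanism itself is not detailed in the abstract; the sketch below (arbitrary dataset, learner count, and base model, none of them from the paper) only illustrates the framework's premise that base learners fitted on small disjoint sub-datasets and combined by voting can stand in for learners that each sample the entire dataset.

```python
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = load_breast_cancer(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Disjoint shards: each "edge learner" sees only its own small
# sub-dataset, never the full training set.
n_learners = 5
rng = np.random.default_rng(0)
shards = np.array_split(rng.permutation(len(X_tr)), n_learners)
learners = [
    DecisionTreeClassifier(random_state=i).fit(X_tr[idx], y_tr[idx])
    for i, idx in enumerate(shards)
]

# Aggregate the learners' predictions by majority vote.
votes = np.stack([clf.predict(X_te) for clf in learners])
ensemble_pred = (votes.mean(axis=0) >= 0.5).astype(int)
print("ensemble accuracy:", accuracy_score(y_te, ensemble_pred))
```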


2021 ◽  
Vol 2 (3) ◽  
pp. 501-515
Author(s):  
Rajib Kumar Biswas ◽  
Farabi Bin Ahmed ◽  
Md. Ehsanul Haque ◽  
Afra Anam Provasha ◽  
Zahid Hasan ◽  
...  

Steel fiber dosage and aspect ratio are important parameters that significantly influence the mechanical properties of ultrahigh-performance fiber-reinforced concrete (UHPFRC). Steel fiber dosage also contributes significantly to the initial manufacturing cost of UHPFRC. This study presents a comprehensive literature review of the effects of steel fiber percentage and aspect ratio on the setting time, workability, and mechanical properties of UHPFRC. It was evident that (1) an increase in steel fiber dosage and aspect ratio negatively impacted workability, owing to interlocking between fibers; (2) compressive strength was positively influenced by steel fiber dosage and aspect ratio; and (3) a faster loading rate significantly improved the mechanical properties. There were also some shortcomings in the methods used to measure setting time. Lastly, this research highlights open issues for future research. The findings of the study are useful for practicing engineers seeking to understand the distinctive characteristics of UHPFRC.


2020 ◽  
Vol 36 (S1) ◽  
pp. 37-37
Author(s):  
Americo Cicchetti ◽  
Rossella Di Bidino ◽  
Entela Xoxi ◽  
Irene Luccarini ◽  
Alessia Brigido

Introduction: Different value frameworks (VFs) have been proposed in order to translate available evidence on the risk-benefit profiles of new treatments into pricing and reimbursement (P&R) decisions. However, limited evidence is available on the impact of their implementation. It is relevant to distinguish between VFs proposed by scientific societies and providers, which are usually applicable to all treatments, and VFs elaborated by regulatory agencies and health technology assessment (HTA) bodies, which focus on specific therapeutic areas. Such heterogeneity in VFs has significant implications in terms of the value dimensions considered and the criteria adopted to define or support a pricing decision.

Methods: A literature search was conducted to identify VFs already proposed or adopted for onco-hematology treatments. Both the scientific and grey literature were investigated. Then, an ad hoc data collection was conducted for multiple myeloma; breast, prostate, and urothelial cancer; and non-small cell lung cancer (NSCLC) therapies. Pharmaceutical products authorized by the European Medicines Agency from January 2014 to December 2019 were identified. The primary sources of data were European Public Assessment Reports and P&R decisions taken by the Italian Medicines Agency (AIFA) up to September 2019.

Results: The analysis allowed us to define a taxonomy to distinguish categories of VF relevant to onco-hematological treatments. We identified the “real-world” VF that emerged from past P&R decisions taken at the Italian level. Data were collected on both clinical and economic outcomes/indicators, as well as on decisions taken on the innovativeness of therapies. Relevant differences emerge between the real-world value framework and the one that should be applied given the normative framework of the Italian health system.

Conclusions: The value framework that emerged from the analysis addresses specific aspects of onco-hematological treatments identified in an ad hoc analysis of treatments authorized in the last five years. The perspective adopted to elaborate the VF was that of an HTA agency responsible for P&R decisions at the national level. Furthermore, by comparing the real-world value framework with the one based on the general criteria defined by national legislation, our analysis allowed identification of the most critical points of the current national P&R process in terms of the sustainability of current and future therapies, such as advanced therapies and tumor-agnostic therapies.


2021 ◽  
Vol 13 (3) ◽  
pp. 1589
Author(s):  
Juan Sánchez-Fernández ◽  
Luis-Alberto Casado-Aranda ◽  
Ana-Belén Bastidas-Manzano

The limitations of self-report techniques (i.e., questionnaires or surveys) in measuring consumer response to advertising stimuli have necessitated more objective and accurate tools from the fields of neuroscience and psychology for the study of consumer behavior, resulting in the creation of consumer neuroscience. This recent marketing sub-field stems from a wide range of disciplines and applies multiple types of techniques to diverse advertising subdomains (e.g., advertising constructs, media elements, or prediction strategies). Due to its complex nature and continuous growth, this area of research calls for a clear understanding of its evolution, current scope, and potential domains in the field of advertising. Thus, the current research is among the first to apply a bibliometric approach to clarify the main research streams analyzing advertising persuasion using neuroimaging. In particular, this paper combines a comprehensive review with a performance analysis of 203 papers published between 1986 and 2019 in outlets indexed in the ISI Web of Science database. Our findings describe the research tools, journals, and themes that are worth considering in future research. The study also provides an agenda for future research and therefore constitutes a starting point for advertising academics and professionals intending to use neuroimaging techniques.
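As a rough illustration of the kind of performance analysis such bibliometric studies run over Web of Science exports, the snippet below tallies publications per year, outlet, and keyword; the three records are invented placeholders, not data from the paper.

```python
from collections import Counter

# Invented Web of Science-style records (the paper analyzed 203 real ones).
records = [
    {"year": 2012, "journal": "J. Advert.", "keywords": ["fMRI", "persuasion"]},
    {"year": 2017, "journal": "Front. Neurosci.", "keywords": ["EEG", "advertising"]},
    {"year": 2019, "journal": "J. Advert.", "keywords": ["fMRI", "advertising"]},
]

pubs_per_year = Counter(r["year"] for r in records)        # growth over time
top_journals = Counter(r["journal"] for r in records)      # most active outlets
top_keywords = Counter(k for r in records for k in r["keywords"])  # themes

print(pubs_per_year.most_common())
print(top_journals.most_common(1))
print(top_keywords.most_common(3))
```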


2021 ◽  
Vol 13 (4) ◽  
pp. 2121 ◽  
Author(s):  
Ingrid Vigna ◽  
Angelo Besana ◽  
Elena Comino ◽  
Alessandro Pezzoli

Although increasing concern about climate change has raised awareness of the fundamental role of forest ecosystems, forests are threatened by human-induced impacts worldwide. Among these, wildfire risk is clearly the result of the interaction between human activities, ecological domains, and climate. However, a clear understanding of these interactions is still needed at both the global and local levels. Numerous studies have proven the validity of the socioecological system (SES) approach in addressing this kind of interdisciplinary issue. Therefore, a systematic review of the existing literature on the application of SES frameworks to forest ecosystems is carried out, with a specific focus on wildfire risk management. The results demonstrate the existence of different methodological approaches that can be grouped into seven main categories, ranging from qualitative analyses to quantitative, spatially explicit investigations. The strengths and limitations of the approaches are discussed, with specific reference to the geographical settings of the works. The research suggests the importance of involving local communities and considering local knowledge in wildfire risk management. This review provides a starting point for future research on forest SESs and a supporting tool for the development of sustainable wildfire risk adaptation and mitigation strategies.


Machines ◽  
2021 ◽  
Vol 9 (1) ◽  
pp. 13
Author(s):  
Yuhang Yang ◽  
Zhiqiao Dong ◽  
Yuquan Meng ◽  
Chenhui Shao

High-fidelity characterization and effective monitoring of spatial and spatiotemporal processes are crucial for high-performance quality control of many manufacturing processes and systems in the era of smart manufacturing. Although recent developments in measurement technology have made it possible to acquire high-resolution three-dimensional (3D) surface measurement data, using such technologies in real-world production settings is generally expensive and time-consuming. Data-driven approaches that stem from statistics and machine learning can potentially enable intelligent, cost-effective surface measurement, allowing manufacturers to use high-resolution surface data for better decision-making without incurring the substantial production cost of data acquisition. Among these methods, spatial and spatiotemporal interpolation techniques can draw inferences about unmeasured locations on a surface from the measurements of other locations, thus decreasing measurement cost and time. However, interpolation methods are very sensitive to the availability of measurement data, and their performance largely depends on the measurement scheme or sampling design, i.e., how measurement efforts are allocated. As such, sampling design is considered another important enabler of intelligent surface measurement. This paper reviews and summarizes the state-of-the-art research in interpolation and sampling design for surface measurement in varied manufacturing applications. Research gaps and future research directions are also identified and can serve as a guideline for industrial practitioners and researchers in these areas.
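Among the interpolation families such reviews cover, inverse-distance weighting is one of the simplest concrete illustrations of inferring unmeasured surface locations from a sparse set of measurements; the sketch below runs on synthetic data and is not taken from any method surveyed in the paper.

```python
import numpy as np

def idw_interpolate(xy_known, z_known, xy_query, power=2.0, eps=1e-12):
    """Inverse-distance weighting: estimate each unmeasured location as a
    weighted mean of measured heights, weighting nearer points more
    heavily (weight = 1 / distance**power)."""
    d = np.linalg.norm(xy_query[:, None, :] - xy_known[None, :, :], axis=2)
    w = 1.0 / (d**power + eps)            # eps guards against zero distance
    return (w @ z_known) / w.sum(axis=1)

rng = np.random.default_rng(0)
xy_known = rng.uniform(0, 1, size=(50, 2))             # sparse measured sites
z_known = np.sin(3 * xy_known[:, 0]) + xy_known[:, 1]  # synthetic surface
xy_query = rng.uniform(0, 1, size=(5, 2))              # unmeasured locations
print(idw_interpolate(xy_known, z_known, xy_query))
```

A sampling design in this setting would choose `xy_known` deliberately (e.g., a space-filling layout) rather than at random, since the interpolation error depends heavily on where the measurement effort is placed.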

