selected subset
Recently Published Documents

Total documents: 89 (five years: 38)
H-index: 13 (five years: 2)

2021 ◽  
Author(s):  
Rohan Ghuge ◽  
Joseph Kwon ◽  
Viswanath Nagarajan ◽  
Adetee Sharma

Assortment optimization involves selecting a subset of products to offer to customers in order to maximize revenue. Often, the selected subset must also satisfy some constraints, such as capacity or space usage. Two key aspects of assortment optimization are (1) modeling customer behavior and (2) computing optimal or near-optimal assortments efficiently. The paired combinatorial logit (PCL) model is a generic customer choice model that allows for arbitrary correlations in the utilities of different products. The PCL model has greater modeling power than other choice models, such as the multinomial logit and nested logit. In “Constrained Assortment Optimization Under the Paired Combinatorial Logit Model,” Ghuge, Kwon, Nagarajan, and Sharma provide efficient algorithms that find provably near-optimal solutions for PCL assortment optimization under several types of constraints. These include the basic unconstrained problem (which is already intractable to solve exactly), multidimensional space constraints, and partition constraints. The authors also demonstrate via extensive experiments that their algorithms typically achieve over 95% of the optimal revenue.
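To make the modeling baseline concrete, here is a minimal sketch of expected revenue under the multinomial logit (MNL) model that PCL generalizes. The brute-force subset search only illustrates the objective on a tiny catalog; the prices, preference weights, and no-purchase weight `v0` are invented for illustration, and the paper's PCL algorithms are far more involved.

```python
from itertools import chain, combinations

def mnl_revenue(assortment, prices, weights, v0=1.0):
    # Under MNL, P(buy i | S) = w_i / (v0 + sum_{j in S} w_j);
    # expected revenue of assortment S is sum_{i in S} p_i * P(buy i | S).
    denom = v0 + sum(weights[i] for i in assortment)
    return sum(prices[i] * weights[i] for i in assortment) / denom

def best_assortment(prices, weights, v0=1.0):
    # Brute force over all non-empty subsets; fine for a handful of products.
    n = len(prices)
    subsets = chain.from_iterable(combinations(range(n), k) for k in range(1, n + 1))
    return max(subsets, key=lambda s: mnl_revenue(s, prices, weights, v0))
```

With prices [10, 8, 6] and weights [1, 2, 3], the optimum is the revenue-ordered set {0, 1}: adding the cheapest product would dilute demand for the expensive ones, which is exactly the trade-off assortment optimization captures.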


2021 ◽  
Vol 923 (2) ◽  
pp. 157
Author(s):  
Abigail J. Lee ◽  
Wendy L. Freedman ◽  
Barry F. Madore ◽  
Kayla A. Owens ◽  
In Sung Jang

Abstract The recently developed J-region asymptotic giant branch (JAGB) method has extraordinary potential as an extragalactic standard candle, capable of calibrating the absolute magnitudes of locally accessible Type Ia supernovae, thereby leading to an independent determination of the Hubble constant. Using Gaia Early Data Release 3 (EDR3) parallaxes, we calibrate the zero-point of the JAGB method, based on the mean luminosity of a color-selected subset of carbon-rich AGB stars. We identify Galactic carbon stars from the literature and use their near-infrared photometry and Gaia EDR3 parallaxes to measure their absolute J-band magnitudes. Based on these Milky Way parallaxes, we determine the zero-point of the JAGB method to be M_J = −6.14 ± 0.05 (stat) ± 0.11 (sys) mag. This Galactic calibration serves as a consistency check on the JAGB zero-point, agreeing well with previously published, independent JAGB calibrations based on geometric, detached eclipsing binary distances to the LMC and SMC. However, the JAGB stars used in this study suffer from the high parallax uncertainties that afflict the bright and red stars in EDR3, so we are not able to attain the higher precision of previous calibrations, and will ultimately rely on the future, improved DR4 and DR5 releases.
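The core step of turning a parallax and an apparent magnitude into an absolute magnitude uses the standard distance-modulus relation. The sketch below shows only that relation; it omits the parallax zero-point corrections and the statistical error treatment that the actual calibration requires.

```python
import math

def absolute_magnitude(apparent_mag, parallax_mas):
    # M = m - 5*log10(d / 10 pc), with distance d = 1000 / parallax_mas parsecs,
    # which simplifies to M = m + 5*log10(parallax_mas) - 10.
    return apparent_mag + 5.0 * math.log10(parallax_mas / 1000.0) + 5.0
```

A star with a parallax of 100 mas lies at exactly 10 pc, so its absolute and apparent magnitudes coincide; at 10 mas (100 pc) the absolute magnitude is five magnitudes brighter than the apparent one.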


2021 ◽  
Author(s):  
Vinayak Kerbaji More ◽  
Ashish Jakhetiya ◽  
Arun Pandey ◽  
Tarang Patel

Abstract Adenoid cystic carcinoma (ACC) is a rare and aggressive variant of salivary gland neoplasm. Perineural invasion and resistance to current chemotherapeutic drugs make treatment challenging. Surgery remains the treatment of choice in resectable cases, with postoperative radiotherapy in a selected subset of patients. In cases that are technically unresectable upfront, neoadjuvant chemotherapy (NACT) can be used as an option to achieve R0 resection. Here we present a case of minor salivary gland ACC that was successfully downstaged and underwent R0 resection after NACT.


2021 ◽  
Vol ahead-of-print (ahead-of-print) ◽  
Author(s):  
Shaghayegh Sadeghiyan ◽  
Farhad Hosseinzadeh Lotfi ◽  
Behrouz Daneshian ◽  
Nima Azarmir Shotobani

Purpose Project selection management is a challenge for project-oriented organizations, particularly when decision-makers face limited resources. One of the main concerns is selecting an optimal subset of projects that satisfies the requirements of the organization while providing enough resources to each selected project. Projects that receive insufficient resources, or that demand the whole of the organization's resources, are likely to fail. The issue is therefore one of risk: choosing a set of projects that balances investment against collective benefit. Design/methodology/approach A model is presented for project selection and has been tested on 37 available projects. The model significantly increased the efficiency of the selected subset of projects in comparison with the other model, because it chose a diverse subset of projects. Findings The paper provides a general framework for project selection that yields a diverse and balanced subset of projects, increasing the efficiency of the selected subset and reducing the impact of uncertainty risk on the selection process. Research limitations/implications For the purposes of project selection, any project whose results are uncertain is a risky project because, if the project fails, it reduces the combined project value. For example, a pharmaceutical company's R&D project is affected by the uncertain results of a specific compound. If the company invests in different compounds, a failure with one will be offset by a good result on another. Therefore, by selecting a diverse set of projects, the organization carries a different set of risks. Originality/value This paper discusses the risk of selecting, or being responsible for selecting, a project under uncertainty.
Most existing work in the field of project selection either considers only the risks facing individual projects or relies on models that do not take risk into account.
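As a toy illustration of the balance this abstract describes, the greedy sketch below ranks projects by expected value per unit cost and caps how many projects may share a risk category, so that one category's failure cannot sink the whole portfolio. The field names, numbers, and the greedy rule are all invented for illustration; they are not the paper's model.

```python
from collections import Counter

def select_projects(projects, budget, max_per_category=1):
    # Greedy sketch: rank by expected value per unit cost, add projects while
    # the budget allows, and cap projects per category so that a single risk
    # source cannot dominate the selected subset.
    chosen, spent, per_cat = [], 0.0, Counter()
    for p in sorted(projects, key=lambda q: q["value"] / q["cost"], reverse=True):
        if spent + p["cost"] <= budget and per_cat[p["category"]] < max_per_category:
            chosen.append(p["name"])
            spent += p["cost"]
            per_cat[p["category"]] += 1
    return chosen
```

With two pharmaceutical projects and one energy project, a budget of 4, and a cap of one project per category, the second pharmaceutical project is skipped in favor of the energy project even though its value/cost ratio is higher.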


2021 ◽  
Author(s):  
Pia Marincek ◽  
Natascha D. Wagner ◽  
Salvatore Tomasello

Herbaria harbor a tremendous number of plant specimens that are rarely used for plant systematic studies. The main reason is the difficulty of extracting a decent quantity of good-quality DNA from the preserved plant material. While the extraction of ancient DNA from animals is well established, studies including old plant material are still underrepresented. In our study we compared the standard Qiagen DNeasy Plant Mini Kit and a specific PTB-DTT protocol on two different plant genera (Xanthium L. and Salix L.). The included herbarium material covered about two centuries of plant collections. A selected subset of samples was used for a standard library preparation as well as a target enrichment approach. The results revealed that the PTB-DTT protocol yielded DNA of higher quantity and quality. Despite its lower overall DNA yield, the Qiagen kit gave better sequencing results in terms of the number of filtered and mapped reads. We were able to successfully sequence a sample from 1820 and conclude that it is possible to include old herbarium specimens in NGS approaches. This opens a treasure box in phylogenomic research.


Author(s):  
Dušan Prodanović ◽  
Nemanja Branisavljević

Abstract This chapter covers the main aspects of data archiving, as the last phase of data handling in the process of urban drainage and stormwater management metrology. Data archiving is the process of preparing and storing the data for future use, usually not executed by the personnel who acquired the data. A data archive (also known as a data repository) can be defined as storage of a selected subset of raw, processed, validated and resampled data, with descriptions and other meta-data, linked to simulation results, if there are any. A data archive should be equipped with tools for search and data extraction along with procedures for data management, in order to maintain the database quality for an extended period of time. It is recommended, mostly for security reasons, to separate (both in a physical and in a digital sense) the archive database from the working database. This chapter provides the reader with relevant information about the most important issues related to data archive design, the archiving process and data characteristics regarding archiving. Also, the importance of good and comprehensive meta-data is underlined throughout the chapter. The management of a data archive is evaluated with a special focus on predicting future resources needed to keep the archive updated, secure, available, and in compliance with legal demands and limitations. At the end, a set of recommendations for creating and maintaining a data archive in the scope of urban drainage is given.
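A data archive entry of the kind described, pairing a stored data series with its descriptive meta-data, might be sketched as follows. Every field name here is illustrative rather than a prescribed standard, and a real archive would add provenance, validation flags, and links to simulation results as the chapter recommends.

```python
from dataclasses import dataclass, field, asdict

@dataclass
class ArchiveRecord:
    # Minimal meta-data: what was measured, in which unit, at what
    # processing level, by which sensor, and where.
    series_id: str
    variable: str
    unit: str
    processing_level: str   # e.g. "raw", "validated" or "resampled"
    sensor: str
    location: str
    values: list = field(default_factory=list)

rec = ArchiveRecord("S-001", "water_level", "m", "validated",
                    "ultrasonic", "outfall CSO-3",
                    values=[0.12, 0.15, 0.14])
record_dict = asdict(rec)  # plain-dict form, ready for a document store
```

Keeping such records as plain, self-describing structures makes it straightforward to separate the archive database from the working database, as the chapter advises.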


Open Biology ◽  
2021 ◽  
Vol 11 (8) ◽  
pp. 210098
Author(s):  
R. W. Meek ◽  
I. T. Cadby ◽  
A. L. Lovering

Glycolysis and gluconeogenesis are central pathways of metabolism across all domains of life. A prominent enzyme in these pathways is phosphoglucose isomerase (PGI), which mediates the interconversion of glucose-6-phosphate and fructose-6-phosphate. The predatory bacterium Bdellovibrio bacteriovorus leads a complex life cycle, switching between intraperiplasmic replicative and extracellular ‘hunter’ attack-phase stages. Passage through this complex life cycle involves different metabolic states. Here we present the unliganded and substrate-bound structures of the B. bacteriovorus PGI, solved to 1.74 Å and 1.67 Å, respectively. These structures reveal that an induced-fit conformational change within the active site is not a prerequisite for the binding of substrates in some PGIs. Crucially, we suggest a phenylalanine residue, conserved across most PGI enzymes but substituted for glycine in B. bacteriovorus and other select organisms, is central to the induced-fit mode of substrate recognition for PGIs. This enzyme also represents the smallest conventional PGI characterized to date and probably represents the minimal requirements for a functional PGI.


2021 ◽  
Author(s):  
◽  
Juan Carlos Vizcarra ◽  
Erik A Burlingame ◽  
Clemens B Hug ◽  
Yury Goltsev ◽  
...  

Emerging multiplexed imaging platforms provide an unprecedented view of an increasing number of molecular markers at subcellular resolution and of the dynamic evolution of tumor cellular composition. As such, they are capable of elucidating cell-to-cell interactions within the tumor microenvironment that impact clinical outcome and therapeutic response. However, the rapid development of these platforms has far outpaced the computational methods for processing and analyzing the data they generate. While technologically disparate, all imaging assays share many computational requirements for post-collection data processing. We convened a workshop to characterize these shared computational challenges and a follow-up hackathon to implement solutions for a selected subset of them. Here, we delineate these areas, which reflect major axes of research within the field, including image registration, segmentation of cells and subcellular structures, and identification of cell types from their morphology. We further describe the logistical organization of these events, believing our lessons learned can aid others in uniting the imaging community around self-identified topics of mutual interest, in designing and implementing operational procedures to address those topics, and in mitigating issues inherent in image analysis (e.g., sharing exemplar images of large datasets and disseminating baseline solutions to hackathon challenges through open-source code repositories).


2021 ◽  
Vol 13 (15) ◽  
pp. 2909
Author(s):  
Chuanpeng Zhao ◽  
Cheng-Zhi Qin

Accurate large-area mangrove classification is a challenging task due to the complexity of mangroves, such as the abundance of species within the mangrove category and the varied appearances resulting from a large latitudinal span and diverse habitats. Existing studies have improved mangrove classification by introducing time-series images, constructing new indices sensitive to mangroves, and correcting classifications through empirical constraints and visual inspection. However, false positive misclassifications are still prevalent in current classification results before correction, and the key reason for false positives in large-area mangrove classification is unknown. To address this knowledge gap, this paper proposes the hypothesis that an inadequate classification scheme (i.e., the choice of categories) is the key reason for such false positive misclassification. To validate this hypothesis, new categories covering non-mangrove vegetation near water (i.e., within one pixel of water bodies), which is prone to being misclassified as mangrove, were added to a commonly used standard classification scheme to form a new scheme. Two experiments were conducted under controlled conditions. The first experiment used the same full feature set to derive direct mangrove classification results for China in 2018 on Google Earth Engine with the standard scheme and the new scheme, respectively. The second experiment used the optimal features, to balance the probability that a selected feature is effective for each scheme. A comparison shows that including the new categories reduced false positive pixels by 71.3% in the first experiment and by 66.3% in the second. Local characteristics of false positive pixels within 1 × 1 km cells and direct classification results in two selected subset areas were also analyzed for quantitative and qualitative validation.
All validation results from the two experiments support the hypothesis. The validated hypothesis can easily be applied in other studies to alleviate the prevalence of false positive misclassifications.
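The "near water" category hinges on a simple spatial test: a vegetation pixel counts as near water if any cell of its 3 × 3 neighborhood is water. A pure-NumPy sketch of that mask, with invented array names (the study's actual workflow runs on Google Earth Engine, not NumPy):

```python
import numpy as np

def near_water_vegetation(veg_mask, water_mask):
    # Flag vegetation pixels within one pixel of water (8-connected).
    # Implemented as a shift-based dilation of the water mask; the padding
    # treats everything beyond the image edge as non-water.
    padded = np.pad(water_mask, 1, constant_values=False)
    h, w = water_mask.shape
    near_water = np.zeros_like(water_mask, dtype=bool)
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            near_water |= padded[1 + dy:1 + dy + h, 1 + dx:1 + dx + w]
    return veg_mask & near_water
```

Pixels flagged by this test would be assigned to the new near-water vegetation category instead of competing directly with the mangrove class.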


2021 ◽  
Vol 14 (11) ◽  
pp. 2397-2409
Author(s):  
Ziyun Wei ◽  
Immanuel Trummer ◽  
Connor Anderson

Recently proposed voice query interfaces translate voice input into SQL queries. Unreliable speech recognition, on top of the intrinsic challenges of text-to-SQL translation, makes it hard to reliably interpret user input. We present MUVE (Multiplots for Voice quEries), a system for robust voice querying. MUVE reduces the impact of ambiguous voice queries by filling the screen with multiplots, capturing the results of phonetically similar queries. It maps voice input to a probability distribution over query candidates, executes a selected subset of queries, and visualizes their results in a multiplot. Our goal is to maximize the probability of showing the correct query result. We also want to optimize the visualization (e.g., by coloring a subset of likely results) in order to minimize the expected time until users find the correct result. Via a user study, we validate a simple cost model estimating the latter overhead. The resulting optimization problem is NP-hard. We propose an exhaustive algorithm, based on integer programming, as well as a greedy heuristic. As shown in a corresponding user study, MUVE enables users to identify accurate results faster compared with prior work.
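For the capacity-only part of the problem, selecting which query candidates to show reduces to picking the most probable ones. The sketch below covers only that trivial special case, with invented candidate names; MUVE's actual optimization also accounts for execution cost, screen layout, and coloring, and is NP-hard.

```python
def select_queries(candidates, k):
    # candidates: mapping from query text to the probability that it is the
    # query the user intended; k: number of plot slots on screen.
    # Showing the k most probable candidates maximizes the probability that
    # the correct result appears somewhere in the multiplot.
    ranked = sorted(candidates, key=candidates.get, reverse=True)
    shown = ranked[:k]
    coverage = sum(candidates[q] for q in shown)
    return shown, coverage
```

With candidate probabilities {0.5, 0.3, 0.2} and two plot slots, the two most probable candidates are shown and the correct result is on screen with probability 0.8.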

