labeling scheme
Recently Published Documents

TOTAL DOCUMENTS: 172 (five years: 33)
H-INDEX: 17 (five years: 2)

2021 ◽  
Vol 9 ◽  
Author(s):  
Rui Zhao ◽  
Dingye Wu ◽  
Junke Zhang

A carbon labeling scheme quantifies the carbon emissions of a product or service; it can be used to guide low-carbon consumption and production and is a powerful tool for achieving carbon neutrality. This policy brief reviews the progress of carbon labeling schemes to provide insight into their future prospects for carbon neutrality in China. The results show that: ① China has not yet officially established a carbon labeling system, although there has been a pilot attempt for electric appliances; ② public perception of carbon labeling schemes remains low; ③ there is room to improve the transparency and comparability of the existing carbon labeling schemes.


Mathematics ◽  
2021 ◽  
Vol 9 (21) ◽  
pp. 2710
Author(s):  
Martin Bača ◽  
Muhammad Imran ◽  
Andrea Semaničová-Feňovčíková

It is easily observed that the vertices of a simple graph cannot have pairwise distinct degrees. This means that no simple graph of order at least two is, in this sense, irregular. However, a multigraph can be irregular. Chartrand et al., in 1988, posed the following problem: in a loopless multigraph, how can one determine the fewest parallel edges required to ensure that all vertices have distinct degrees? This problem is known as the graph labeling problem and, for its solution, Chartrand et al. introduced irregular assignments. The irregularity strength of a graph G is defined as the maximal edge label used in an irregular assignment, minimized over all irregular assignments. Equivalently, the irregularity strength of a simple graph G is the smallest maximum multiplicity of an edge needed to create an irregular multigraph from G. In the present paper, we show the existence of a required irregular labeling scheme that proves the exact value of the irregularity strength of wheels. We then modify this irregular mapping in six cases and obtain labelings that determine the exact value of the modular irregularity strength of wheels, a natural modification of the irregularity strength.
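The definitions above lend themselves to a small brute-force check (helper names are ours, not the paper's): an assignment is irregular when all weighted degrees are pairwise distinct, and the irregularity strength is the smallest maximum label over such assignments. For the wheel W_n (a hub joined to an n-cycle):

```python
from itertools import product

def wheel_edges(n):
    """Edges of the wheel W_n: hub 0 joined to cycle vertices 1..n."""
    spokes = [(0, i) for i in range(1, n + 1)]
    rim = [(i, i % n + 1) for i in range(1, n + 1)]
    return spokes + rim

def is_irregular(edges, labels, num_vertices):
    """An assignment is irregular if all weighted degrees are pairwise distinct."""
    wd = [0] * num_vertices
    for (u, v), lab in zip(edges, labels):
        wd[u] += lab
        wd[v] += lab
    return len(set(wd)) == num_vertices

def irregularity_strength(n):
    """Brute-force the smallest s admitting an irregular assignment of W_n.

    Exponential in the number of edges -- only feasible for tiny wheels.
    """
    edges = wheel_edges(n)
    s = 1
    while True:
        for labels in product(range(1, s + 1), repeat=len(edges)):
            if is_irregular(edges, labels, n + 1):
                return s
        s += 1
```

For example, W_3 is the complete graph K_4, whose irregularity strength is 3: with labels from {1, 2} the smallest and largest possible weighted degrees (3 and 6) would each force all three incident edges of some vertex to carry one label, but those two vertices share an edge.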


F1000Research ◽  
2021 ◽  
Vol 10 ◽  
pp. 907
Author(s):  
Su-Cheng Haw ◽  
Aisyah Amin ◽  
Chee-Onn Wong ◽  
Samini Subramaniam

Background: As XML is the standard for data exchange over the World Wide Web, it is important that an eXtensible Markup Language (XML) database supports not only efficient query processing but also frequent data update operations over dynamically changing Web content. Most existing XML annotation is based on a labeling scheme that identifies the hierarchical position of each XML node. This computation is costly because an update can force the whole XML tree to be re-labeled, an impact that is most evident on large datasets. A robust labeling scheme that avoids re-labeling is therefore crucial. Method: Here, we present ORD-GAP (named after Order Gap), a robust and persistent XML labeling scheme that supports dynamic updates. ORD-GAP assigns unique identifiers with gaps between XML nodes, from which the level, Parent-Child (P-C), Ancestor-Descendant (A-D) and sibling relationships can easily be identified. ORD-GAP adopts the OrdPath labeling scheme for future insertions. Results: We demonstrate that ORD-GAP is robust under dynamic updates and evaluate it in three use cases: (i) left-most, (ii) in-between and (iii) right-most insertion. Experimental evaluation on the DBLP dataset demonstrated that ORD-GAP outperformed existing approaches such as ORDPath and ME Labeling in database storage size, data loading time and query retrieval. On average, ORD-GAP has the best storage and query retrieval times. Conclusion: The main contributions of this paper are: (i) a robust labeling scheme, ORD-GAP, that leaves a gap between adjacent node labels to support future insertions, and (ii) an efficient mapping scheme, built upon the ORD-GAP labeling scheme, to transform XML into a relational database (RDB) effectively.
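The gap idea behind such a scheme can be sketched as follows (a minimal illustration with hypothetical names, not the authors' implementation): siblings receive ordinal keys spaced by a fixed gap, so all three insertion cases avoid relabeling until a gap is exhausted:

```python
GAP = 100  # spacing between sibling keys; larger gaps defer relabeling longer

def initial_labels(n, gap=GAP):
    """Assign gapped ordinal keys to n siblings: gap, 2*gap, ..., n*gap."""
    return [gap * (i + 1) for i in range(n)]

def insert_between(left, right):
    """Key for a node inserted between two existing keys; None if the gap
    is exhausted and a local (e.g. OrdPath-style) relabeling is needed."""
    if right - left < 2:
        return None
    return (left + right) // 2

labels = initial_labels(3)                  # [100, 200, 300]
mid = insert_between(labels[0], labels[1])  # in-between insertion -> 150
leftmost = insert_between(0, labels[0])     # left-most insertion  -> 50
rightmost = labels[-1] + GAP                # right-most insertion -> 400
```

The design trade-off is storage versus update headroom: wider gaps consume bigger labels but tolerate more insertions before any relabeling is triggered.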


2021 ◽  
Vol 11 (1) ◽  
Author(s):  
Issam H. Laradji ◽  
Alzayat Saleh ◽  
Pau Rodriguez ◽  
Derek Nowrouzezahrai ◽  
Mostafa Rahimi Azghadi ◽  
...  

Abstract: Estimating fish body measurements such as length, width, and mass has received considerable research attention due to its potential to boost productivity in marine and aquaculture applications. Some methods are based on manual collection of these measurements using tools such as a ruler, which is time-consuming and labour-intensive. Others rely on fully supervised segmentation models to acquire these measurements automatically, but require per-pixel labels that are also time-consuming to collect: it can take up to two minutes per fish to acquire accurate segmentation labels. To address this problem, we propose a segmentation model that can train efficiently on images labeled with point-level supervision, where each fish is annotated with a single click. This labeling scheme takes an average of only one second per fish. Our model uses a fully convolutional neural network with one branch that outputs per-pixel scores and another that outputs an affinity matrix. These two outputs are aggregated using a random walk to produce the final, refined per-pixel output. The whole model is trained end-to-end using the localization-based counting fully convolutional neural network (LCFCN) loss, and we therefore call our method Affinity-LCFCN (A-LCFCN). We conduct experiments on the DeepFish dataset, which contains several fish habitats from north-eastern Australia. The results show that A-LCFCN outperforms a fully supervised segmentation model when the annotation budget is fixed, and that A-LCFCN achieves better segmentation results than LCFCN and a standard baseline.


2021 ◽  
Vol 13 (15) ◽  
pp. 8433
Author(s):  
Fatima Lambarraa-Lehnhardt ◽  
Rico Ihle ◽  
Hajar Elyoubi

The Green Moroccan Plan (GMP) is a national long-term strategy launched by the Moroccan government to support the agricultural sector as the main driver of social and economic development. The GMP involves a labeling strategy based on geographical indications, aimed at protecting and promoting the marketing of locally produced food specialties and linking their specific qualities and reputations to their domestic production region. We evaluated the success of this policy by comparing consumers' attitudes and preferences toward a local product with a geographical indication label against one without. We conducted a survey of 500 consumers in major Moroccan cities. The potential consumer base for the local product was found to be segmented, indicating the potential for a domestic niche of environmentally aware consumers who prefer organically and sustainably produced food. We applied the analytic hierarchy process to prioritize the attributes of the commodities of interest, which underscored the importance of origin when choosing a local product without origin labeling; for the labeled product, intrinsic quality attributes were considered more important. These findings demonstrate the limited promotion of the established origin labeling in the domestic market. Hence, we recommend that the Moroccan government reinforce the labeling scheme with an organic label to increase the market potential among environmentally aware consumers by ensuring sustainable production of local products.


2021 ◽  
Vol 24 (4) ◽  
pp. 1-35
Author(s):  
Aleieldin Salem ◽  
Sebastian Banescu ◽  
Alexander Pretschner

The malware analysis and detection research community relies on the online platform VirusTotal to label Android apps based on the scan results of around 60 antiviral scanners. Unfortunately, there are no standards on how best to interpret the scan results acquired from VirusTotal, which leads to the utilization of different threshold-based labeling strategies (e.g., if 10 or more scanners deem an app malicious, it is considered malicious). While some of the utilized thresholds may accurately approximate the ground truths of apps, the fact that VirusTotal changes the set and versions of the scanners it uses makes such thresholds unsustainable over time. We implemented a method, Maat, that tackles these issues of standardization and sustainability by automatically generating a Machine Learning (ML)-based labeling scheme, which outperforms threshold-based labeling strategies. Using the VirusTotal scan reports of 53K Android apps that span one year, we evaluated the applicability of Maat's ML-based labeling strategies by comparing their performance against threshold-based strategies. We found that such ML-based strategies (a) can accurately and consistently label apps based on their VirusTotal scan reports, and (b) contribute to training ML-based detection methods that are more effective at classifying out-of-sample apps than their threshold-based counterparts.
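The threshold-based strategies that Maat is compared against can be sketched directly from the abstract's example cutoff of 10 scanners (function and variable names are illustrative, not from the paper or the VirusTotal API):

```python
def threshold_label(scan_report, threshold=10):
    """Label an app malicious if at least `threshold` scanners flag it.

    `scan_report` maps scanner name -> verdict (True = flagged malicious),
    mimicking a parsed VirusTotal report for one app.
    """
    positives = sum(1 for flagged in scan_report.values() if flagged)
    return positives >= threshold

# 12 of 60 hypothetical scanners flag this app
report = {f"scanner_{i}": i < 12 for i in range(60)}
malicious = threshold_label(report)       # 12 >= 10, so labeled malicious
stricter = threshold_label(report, 15)    # not malicious under a stricter cutoff
```

The sketch also makes the sustainability problem concrete: if VirusTotal later reports, say, 40 scanners instead of 60, the same absolute threshold silently encodes a much stricter fraction of agreeing scanners.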


2021 ◽  
Vol 9 ◽  
Author(s):  
Thomas Martin ◽  
Ross Meyer ◽  
Zane Jobe

Machine-learning algorithms have been used by geoscientists to infer geologic and physical properties from hydrocarbon exploration and development wells for more than 40 years. These techniques historically utilize digital well-log information, which, like any remotely sensed measurement, has resolution limitations. Core is the only subsurface data type that is true to geologic scale and heterogeneity. However, core description and analysis are time-intensive, and therefore most core data are not utilized to their full potential. Quadrant 204 on the United Kingdom Continental Shelf has publicly available open-source core and well-log data. This study utilizes this dataset and machine-learning models to predict lithology and facies at the centimeter scale. We selected 12 wells from the Q204 region with well-log and core data from the Schiehallion, Foinaven, Loyal, and Alligin hydrocarbon fields. We interpreted training data from 659 m of core at the sub-centimeter scale, utilizing a lithology-based labeling scheme (five classes) and a depositional-process-based facies labeling scheme (six classes). Utilizing a "color-channel-log" (CCL) that summarizes the core image at each depth interval, our best performing trained model predicts the correct lithology with 69% accuracy (i.e., the predicted lithology output from the model is the same as the interpreted lithology) and predicts the individual lithology classes of sandstone and mudstone with over 80% accuracy. The CCL data require less compute power than core image data and generate more accurate results. While the process-based facies labels better characterize turbidite and hybrid-event-bed stratigraphy, the machine-learning predictions of facies were not as accurate as those of lithology. In all cases, the standard well-log data cannot accurately predict lithology or facies at the centimeter scale.
The machine-learning workflow developed for this study can unlock warehouses full of high-resolution data in a multitude of geological settings. The workflow can be applied to other geographic areas and deposit types where large quantities of photographed core material are available. This research establishes an open-source, python-based machine-learning workflow to analyze open-source core image data in a scalable, reproducible way. We anticipate that this study will serve as a baseline for future research and analysis of borehole and core data.
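The abstract does not spell out how the "color-channel-log" is computed; a plausible minimal reduction, collapsing a core photograph to one mean-RGB triple per depth interval, might look like this (pure illustration, not the authors' exact CCL definition):

```python
def color_channel_log(core_image, rows_per_interval):
    """Collapse a core photo (list of rows of (R, G, B) pixel tuples) into a
    per-depth-interval log: one mean-(R, G, B) triple per block of rows.

    Illustrative sketch only -- the paper's CCL may be defined differently.
    """
    log = []
    for start in range(0, len(core_image) - rows_per_interval + 1, rows_per_interval):
        block = core_image[start:start + rows_per_interval]
        pixels = [px for row in block for px in row]
        n = len(pixels)
        log.append(tuple(sum(px[c] for px in pixels) / n for c in range(3)))
    return log

# synthetic 10-row "photo", 4 pixels wide: dark upper half, light lower half
img = [[(40, 40, 40)] * 4 for _ in range(5)] + [[(200, 200, 200)] * 4 for _ in range(5)]
ccl = color_channel_log(img, 5)  # two depth intervals, one RGB triple each
```

Whatever the exact definition, the appeal is the same as in the paper: a few numbers per depth interval are far cheaper to train on than full-resolution imagery.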


Symmetry ◽  
2021 ◽  
Vol 13 (6) ◽  
pp. 1072
Author(s):  
Arun Kumar Rai ◽  
Neeraj Kumar ◽  
Rajeev Kumar ◽  
Hari Om ◽  
Satish Chand ◽  
...  

In this paper, a high-capacity reversible data hiding technique using a parametric binary tree labeling scheme is proposed. The parametric binary tree labeling scheme labels a plaintext image's pixels as two categories, regular pixels and irregular pixels, through a symmetric or asymmetric process. Only regular pixels are utilized for secret payload embedding; irregular pixels are not. The proposed technique efficiently exploits intra-block correlation, based on the prediction mean of the block by symmetry or asymmetry. Further, the proposed method utilizes only blocks selected for their pixel correlation, rather than exploiting all blocks for payload embedding. In addition, the proposed scheme enhances encryption performance by employing standard encryption techniques, unlike other block-based reversible data hiding schemes for encrypted images. Experimental results show that the proposed technique maximizes the embedding rate in comparison to state-of-the-art reversible data hiding schemes for encrypted images, while preserving the privacy of the original contents.
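A heavily simplified sketch of the regular/irregular split (illustrative only; the paper's parametric binary tree labeling is more involved): classify each pixel of a block by whether its deviation from the block mean fits within a parameter-bounded range, since only well-predicted pixels can safely carry payload bits:

```python
def split_block_pixels(block, alpha):
    """Split a block's pixel values into (regular, irregular) lists by how far
    each pixel deviates from the block mean. Regular pixels (deviation below
    2**alpha) could carry embedded payload bits; irregular ones are left
    untouched. Illustrative sketch, not the paper's exact labeling scheme.
    """
    mean = sum(block) / len(block)
    regular, irregular = [], []
    for px in block:
        (regular if abs(px - mean) < 2 ** alpha else irregular).append(px)
    return regular, irregular

block = [118, 120, 121, 119, 200]       # smooth block with one outlier pixel
reg, irr = split_block_pixels(block, alpha=5)
```

The parameter plays the role of a capacity/robustness knob: a larger `alpha` admits more pixels as regular (higher embedding rate) at the cost of tolerating poorer prediction.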


2021 ◽  
Vol 13 (6) ◽  
pp. 142
Author(s):  
Matthew Spradling ◽  
Jeremy Straub ◽  
Jay Strong

So-called 'fake news', deceptive online content that attempts to manipulate readers, is a growing problem. A tool of intelligence agencies, scammers and marketers alike, it has been blamed for election interference, public confusion and other issues in the United States and beyond. The problem is particularly pronounced as younger generations choose social media sources over journalistic sources for their information. This paper considers the prospective solution of providing consumers with 'nutrition facts'-style information for online content. To this end, it reviews prior work in product labeling and considers several possible approaches along with the arguments for and against such labels. Based on this analysis, a case is made for a nutrition-facts-based labeling scheme for online content.


Author(s):  
Igor Tkachov

The paper presents the results of a theoretical study on methods for constructing generating structures, based on labeling schemes, for generating sets of complex structural objects. In theoretical terms, the generated objects are mappings of sets of objects into a set of labels; in practical terms, they can be, in particular, visual images. The scientific and practical interest in generative constructions is that they can be used to determine whether objects belong to a certain class, that is, to solve the pattern recognition problem. The problem of constructing a generating labeling scheme belongs to a broad area of modern applied informatics that embraces the Constraint Satisfaction Problem and related topics [1–4], but this problem has not been posed before, and there are still no regular methods for solving it. The analysis of the methods considered here is based on the formalism of the consistent labeling problem [6, 10, 11], which is, on the one hand, a generalization of many discrete Constraint Satisfaction formulations and, on the other hand, a transparent theoretical construction with a well-developed mathematical foundation. The problem of constructing a relational scheme (here, a labeling scheme) that generates a given set of mappings may, by analogy with linguistic models, be called "the problem of grammar restoration" [12–14]. Previous studies showed that it makes sense to attack this problem with equivalent transformations of the labeling scheme [11], because the source table listing all the complex objects that should be generated by the target scheme is itself a trivial variant of a scheme with the given set of consistent labelings; hence the source scheme and the target scheme are equivalent. However, one of the equivalent operations, disunion of a column, cannot be applied regularly, since it requires certain conditions to be met regarding the internal structure of the column.
To expand the capabilities of the four known equivalent transformations of the labeling scheme (deleting and appending a nonexistent labeling, joining of columns, and column disunion), a non-equivalent transformation, "coloring the column labelings", is added. The purpose of the paper is to introduce and investigate this operation, which leads to a non-equivalent transformation of a labeling scheme, and to show the advisability of combining the known equivalent transformations with the introduced quasi-equivalent transformation to solve the problem of constructing generating structures based on labeling schemes. Results: the transformation of the labeling scheme called "coloring the labelings of a scheme column" is introduced. It is shown that applying it yields a quasi-equivalent labeling scheme from whose solution the solution of the original problem can be uniquely restored. A method is proposed for using the newly introduced operation to transform a labeling scheme into a quasi-equivalent one in which the column disunion operation can be performed regularly. This property of the "coloring the column labelings" operation opens the way to a method for restoring a labeling scheme that generates a given set of consistent labelings. Keywords: relational scheme, consistent labeling scheme, equivalent labeling scheme transformations, constraint satisfaction problem.

